Dataset schema (column types and observed ranges):

| Column | Type | Observed range / values |
|---|---|---|
| id | string | length 6–113 |
| author | string | length 2–36 |
| task_category | string | 42 classes |
| tags | list | length 1–4.05k |
| created_time | timestamp[ns, tz=UTC] | 2022-03-02 23:29:04 – 2025-04-10 08:38:38 |
| last_modified | string (date) | 2020-05-14 13:13:12 – 2025-04-19 04:15:39 |
| downloads | int64 | 0 – 118M |
| likes | int64 | 0 – 4.86k |
| README | string | length 30 – 1.01M |
| matched_bigbio_names | list | length 1–8 |
| is_bionlp | string | 3 classes |
| model_cards | string | length 0 – 1M |
| metadata | string | length 2 – 698k |
| source | string | 2 classes |
| matched_task | list | length 1–10 |
| __index_level_0__ | int64 | 0 – 46.9k |
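For readers who want to work with this dump programmatically, here is a minimal sketch of inspecting the columns with the `datasets` library. The dataset ID below is a placeholder, since the source repository is not named in this dump.

```python
from datasets import load_dataset

# Placeholder dataset ID: the dump does not name its source repository.
ds = load_dataset("your-org/model-card-dump", split="train")

# Declared feature types: id, author, task_category, tags, created_time, ...
print(ds.features)

# Spot-check a few of the column statistics summarized above.
print(ds.num_rows)
print(max(len(t) for t in ds["tags"]))          # longest tags list
print(max(ds["downloads"]), max(ds["likes"]))   # download / like maxima
```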
RichardErkhov/XeAI_-_LLaMa_3.2_3B_Instruct_Text2SQL_Legacy-awq
RichardErkhov
null
[ "safetensors", "llama", "4-bit", "awq", "region:us" ]
2025-03-12T05:42:11Z
2025-03-12T05:44:34+00:00
6
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) LLaMa_3.2_3B_Instruct_Text2SQL_Legacy - AWQ - Model creator: https://huggingface.co/XeAI/ - Original model: https://huggingface.co/XeAI/LLaMa_3.2_3B_Instruct_Text2SQL_Legacy/ Original model description: --- library_name: transformers license: mit datasets: - gretelai/synthetic_text_to_sql pipeline_tag: text-generation --- # Model Card for LLaMA 3.2 3B Instruct Text2SQL ## Model Details ### Model Description This is a fine-tuned version of LLaMA 3.2 3B Instruct model, specifically optimized for Text-to-SQL generation tasks. The model has been trained to convert natural language queries into structured SQL commands. - **Developed by:** Zhafran Ramadhan - XeAI - **Model type:** Decoder-only Language Model - **Language(s):** English - MultiLingual - **License:** MIT - **Finetuned from model:** LLaMA 3.2 3B Instruct - **Log WandB Report:** [WandB Report](https://wandb.ai/zhafranr/LLaMA_3-2_3B_Instruct_FineTune_Text2SQL/reports/LLaMa-3-2-3B-Instruct-Fine-Tune-Text2SQL--VmlldzoxMDA2NDkzNA) ### Model Sources - **Repository:** [LLaMA 3.2 3B Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) - **Dataset:** [Synthethic Text2SQL](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql) ## How to Get Started with the Model ### Installation ```python pip install transformers torch accelerate ``` ### Input Format and Usage The model expects input in a specific format following this template: ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> [System context and database schema] <|eot_id|><|start_header_id|>user<|end_header_id|> [User query] <|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ### Basic Usage ```python from transformers import pipeline import torch # Initialize the pipeline generator = pipeline( "text-generation", model="XeAI/LLaMa_3.2_3B_Instruct_Text2SQL", # Replace with your model ID torch_dtype=torch.float16, device_map="auto" ) def generate_sql_query(context, question): # Format the prompt according to the training template prompt = f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|> Cutting Knowledge Date: December 2023 Today Date: 07 Nov 2024 You are a specialized SQL query generator focused solely on the provided RAG database. Your tasks are: 1. Generate SQL queries based on user requests that are related to querying the RAG database. 2. Only output the SQL query itself, without any additional explanation or commentary. 3. Use the context provided from the RAG database to craft accurate queries. Context: {context} <|eot_id|><|start_header_id|>user<|end_header_id|> {question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>""" response = generator( prompt, max_length=500, num_return_sequences=1, temperature=0.1, do_sample=True, pad_token_id=generator.tokenizer.eos_token_id ) return response[0]['generated_text'] # Example usage context = """CREATE TABLE upgrades (id INT, cost FLOAT, type TEXT); INSERT INTO upgrades (id, cost, type) VALUES (1, 500, 'Insulation'), (2, 1000, 'HVAC'), (3, 1500, 'Lighting');""" questions = [ "Find the energy efficiency upgrades with the highest cost and their types.", "Show me all upgrades costing less than 1000 dollars.", "Calculate the average cost of all upgrades." 
] for question in questions: sql = generate_sql_query(context, question) print(f"\nQuestion: {question}") print(f"Generated SQL: {sql}\n") ``` ### Advanced Usage with Custom System Prompt ```python def generate_sql_with_custom_prompt(context, question, custom_system_prompt=""): base_prompt = """<|begin_of_text|><|start_header_id|>system<|end_header_id|> Cutting Knowledge Date: December 2023 Today Date: 07 Nov 2024 You are a specialized SQL query generator focused solely on the provided RAG database.""" full_prompt = f"""{base_prompt} {custom_system_prompt} Context: {context} <|eot_id|><|start_header_id|>user<|end_header_id|> {question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>""" response = generator( full_prompt, max_length=500, num_return_sequences=1, temperature=0.1, do_sample=True, pad_token_id=generator.tokenizer.eos_token_id ) return response[0]['generated_text'] ``` ### Best Practices 1. **Input Formatting**: - Always include the special tokens (<|begin_of_text|>, <|eot_id|>, etc.) - Provide complete database schema in context - Keep questions clear and focused on data retrieval 2. **Parameter Configuration**: - Use temperature=0.1 for consistent SQL generation - Adjust max_length based on expected query complexity - Enable do_sample for more natural completions 3. **Context Management**: - Include relevant table schemas - Provide sample data when needed - Keep context concise but complete ## Uses ### Direct Use The model is designed for converting natural language questions into SQL queries. It can be used for: - Database query generation from natural language - SQL query assistance - Data analysis automation ### Out-of-Scope Use - Production deployment without human validation - Critical decision-making without human oversight - Direct database execution without query validation ## Training Details ### Training Data - Dataset: [Synthethic Text2SQL](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql) - Data preprocessing: Standard text-to-SQL formatting ### Training Procedure #### Training Hyperparameters - **Total Steps:** 4,149 - **Final Training Loss:** 0.1168 - **Evaluation Loss:** 0.2125 - **Learning Rate:** Dynamic with final LR = 0 - **Epochs:** 2.99 - **Gradient Norm:** 1.3121 #### Performance Metrics - **Training Samples/Second:** 6.291 - **Evaluation Samples/Second:** 19.325 - **Steps/Second:** 3.868 - **Total FLOPS:** 1.92e18 #### Training Infrastructure - **Hardware:** Single NVIDIA H100 GPU - **Training Duration:** 5-6 hours - **Total Runtime:** 16,491.75 seconds - **Model Preparation Time:** 0.0051 seconds ## Evaluation ### Metrics The model's performance was tracked using several key metrics: - **Training Loss:** Started at ~1.2, converged to 0.1168 - **Evaluation Loss:** 0.2125 - **Processing Efficiency:** 19.325 samples per second during evaluation ### Results Summary - Achieved stable convergence after ~4000 steps - Maintained consistent performance metrics throughout training - Shows good balance between training and evaluation loss ## Environmental Impact - **Hardware Type:** NVIDIA H100 GPU - **Hours used:** ~6 hours - **Training Location:** [GPUaaS](www.runpod.io) ## Technical Specifications ### Compute Infrastructure - **GPU:** NVIDIA H100 - **Training Duration:** 5-6 hours - **Total Steps:** 4,149 - **FLOPs Utilized:** 1.92e18 ## Model Card Contact [Contact information to be added by Zhafran Ramadhan] --- *Note: This model card follows the guidelines set by the ML community for responsible AI development and deployment.*
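The record above points at an AWQ-quantized repackaging of this model, while the card's examples target the full-precision checkpoint. The following is a minimal sketch of loading the quantized repo from this record instead; it assumes `autoawq` is installed alongside `transformers`, and the prompt is an abbreviated version of the template shown above.

```python
from transformers import pipeline

# Repo ID taken from this record; transformers can load AWQ checkpoints
# directly when the autoawq package is installed (pip install autoawq).
generator = pipeline(
    "text-generation",
    model="RichardErkhov/XeAI_-_LLaMa_3.2_3B_Instruct_Text2SQL_Legacy-awq",
    device_map="auto",
)

context = "CREATE TABLE upgrades (id INT, cost FLOAT, type TEXT);"
question = "Calculate the average cost of all upgrades."

# Abbreviated form of the chat template shown in the card above.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "You are a specialized SQL query generator. Only output the SQL query itself.\n"
    f"Context: {context}\n"
    "<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
    f"{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
)

result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.1)
print(result[0]["generated_text"])
```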
[ "CRAFT" ]
Non_BioNLP
{}
dataset
null
500
EllieS/zephyr-7b-dpo-lora-pubmedqa-mix2
EllieS
null
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:EllieS/pubmedqa_dpo_mix_data", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:adapter:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "region:us" ]
2024-03-03T06:21:04Z
2024-03-04T20:11:23+00:00
58
0
--- base_model: alignment-handbook/zephyr-7b-sft-full datasets: - EllieS/pubmedqa_dpo_mix_data library_name: peft license: apache-2.0 tags: - alignment-handbook - generated_from_trainer - trl - dpo model-index: - name: zephyr-7b-dpo-lora-pubmedqa-mix2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-dpo-lora-pubmedqa-mix2 This model is a fine-tuned version of [EllieS/zephyr-7b-sft-qlora](https://huggingface.co/EllieS/zephyr-7b-sft-qlora) on the EllieS/pubmedqa_dpo_mix_data dataset. It achieves the following results on the evaluation set: - Loss: 0.0013 - Rewards/chosen: -1.8126 - Rewards/rejected: -10.9731 - Rewards/accuracies: 1.0 - Rewards/margins: 9.1605 - Logps/rejected: -1144.0397 - Logps/chosen: -242.4412 - Logits/rejected: -1.7638 - Logits/chosen: -2.8841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - total_eval_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.2697 | 0.04 | 3000 | 0.3396 | 0.2213 | -0.6386 | 1.0 | 0.8599 | -110.5876 | -39.0518 | -3.0278 | -3.0862 | | 0.1599 | 0.07 | 6000 | 0.0750 | -0.5884 | -3.6673 | 1.0 | 3.0789 | -413.4546 | -120.0204 | -2.9055 | -3.0346 | | 0.0563 | 0.11 | 9000 | 0.0204 | -0.6260 | -5.6712 | 1.0 | 5.0452 | -613.8441 | -123.7819 | -3.0269 | -3.1136 | | 0.0463 | 0.14 | 12000 | 0.0287 | -0.7209 | -7.9224 | 1.0 | 7.2014 | -838.9609 | -133.2740 | -3.0642 | -3.1628 | | 0.1206 | 0.18 | 15000 | 0.0030 | -0.9209 | -8.8089 | 1.0 | 7.8880 | -927.6118 | -153.2670 | -3.0802 | -3.1766 | | 0.0508 | 0.22 | 18000 | 0.4964 | -0.4026 | -8.0330 | 1.0 | 7.6304 | -850.0245 | -101.4397 | -3.1314 | -3.2075 | | 0.0323 | 0.25 | 21000 | 0.0872 | -1.4713 | -10.3437 | 1.0 | 8.8723 | -1081.0913 | -208.3129 | -2.6496 | -3.1189 | | 0.4534 | 0.29 | 24000 | 0.0077 | -2.3507 | -12.1827 | 1.0 | 9.8320 | -1264.9957 | -296.2491 | -1.6282 | -2.8665 | | 0.0013 | 0.32 | 27000 | 0.0019 | -2.1480 | -10.6645 | 1.0 | 8.5166 | -1113.1797 | -275.9768 | -1.7614 | -2.8604 | | 0.1404 | 0.36 | 30000 | 0.0002 | -2.4964 | -12.4101 | 1.0 | 9.9138 | -1287.7384 | -310.8155 | -1.5907 | -2.8352 | | 0.0198 | 0.4 | 33000 | 0.0009 | -3.0802 | -13.3347 | 1.0 | 10.2545 | -1380.1964 | -369.1991 | -1.6628 | -2.8372 | | 0.0041 | 0.43 | 36000 | 0.0004 | -2.7800 | -12.5815 | 1.0 | 9.8014 | -1304.8732 | -339.1852 | -1.6282 | -2.8242 | | 0.0007 | 0.47 | 39000 | 0.0007 | -2.9921 | -13.2089 | 1.0 | 10.2168 | -1367.6129 | -360.3922 | -1.6672 | -2.8403 | | 0.0008 | 0.5 | 42000 | 0.0013 | -2.3107 | -11.8754 | 1.0 | 9.5647 | -1234.2609 | -292.2454 | -1.6475 | 
-2.8400 | | 0.0024 | 0.54 | 45000 | 0.0010 | -3.3769 | -13.2333 | 1.0 | 9.8564 | -1370.0538 | -398.8731 | -1.6937 | -2.8403 | | 0.0019 | 0.57 | 48000 | 0.0013 | -2.8151 | -12.4427 | 1.0 | 9.6277 | -1290.9999 | -342.6892 | -1.7047 | -2.8503 | | 0.2266 | 0.61 | 51000 | 0.0014 | -1.9532 | -11.0212 | 1.0 | 9.0680 | -1148.8468 | -256.4992 | -1.6745 | -2.8650 | | 0.0016 | 0.65 | 54000 | 0.0014 | -1.8077 | -10.7512 | 1.0 | 8.9435 | -1121.8423 | -241.9466 | -1.8328 | -2.8946 | | 0.0019 | 0.68 | 57000 | 0.0013 | -1.8159 | -10.8808 | 1.0 | 9.0649 | -1134.8024 | -242.7715 | -1.7644 | -2.8860 | | 0.0013 | 0.72 | 60000 | 0.0013 | -1.7356 | -10.8007 | 1.0 | 9.0651 | -1126.8002 | -234.7419 | -1.7574 | -2.8871 | | 0.0014 | 0.75 | 63000 | 0.0013 | -1.8249 | -10.9773 | 1.0 | 9.1524 | -1144.4586 | -243.6743 | -1.7699 | -2.8867 | | 0.0014 | 0.79 | 66000 | 0.0013 | -1.8308 | -10.9698 | 1.0 | 9.1389 | -1143.7017 | -244.2651 | -1.7597 | -2.8841 | | 0.0011 | 0.83 | 69000 | 0.0013 | -1.8034 | -10.9390 | 1.0 | 9.1356 | -1140.6276 | -241.5220 | -1.7619 | -2.8858 | | 0.0016 | 0.86 | 72000 | 0.0013 | -1.7971 | -10.9097 | 1.0 | 9.1126 | -1137.6914 | -240.8868 | -1.7608 | -2.8852 | | 0.0239 | 0.9 | 75000 | 0.0013 | -1.7976 | -10.9400 | 1.0 | 9.1424 | -1140.7238 | -240.9355 | -1.7773 | -2.8872 | | 0.0024 | 0.93 | 78000 | 0.0013 | -1.7862 | -10.9196 | 1.0 | 9.1334 | -1138.6901 | -239.8036 | -1.7733 | -2.8861 | | 0.0018 | 0.97 | 81000 | 0.0013 | -1.8228 | -10.9802 | 1.0 | 9.1574 | -1144.7491 | -243.4639 | -1.7594 | -2.8860 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
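This repository holds a LoRA adapter trained with DPO rather than full model weights, so inference means attaching the adapter to the base model named in the metadata. Below is a minimal sketch with `peft`; the prompt is illustrative and the generation settings are not taken from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "alignment-handbook/zephyr-7b-sft-full"       # base_model from the metadata
adapter_id = "EllieS/zephyr-7b-dpo-lora-pubmedqa-mix2"   # this record's repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the DPO-trained LoRA adapter on top of the SFT base model.
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Does aspirin reduce the risk of colorectal cancer?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(base.device)

out = model.generate(input_ids=inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```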
[ "PUBMEDQA" ]
BioNLP
{"base_model": "alignment-handbook/zephyr-7b-sft-full", "datasets": ["EllieS/pubmedqa_dpo_mix_data"], "library_name": "peft", "license": "apache-2.0", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "model-index": [{"name": "zephyr-7b-dpo-lora-pubmedqa-mix2", "results": []}]}
dataset
null
501
cunghoctienganh/ca4a1f89-b722-4e9f-81d1-7ac37c1640cd
cunghoctienganh
null
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28", "base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28", "8-bit", "bitsandbytes", "region:us" ]
2025-01-13T13:43:27Z
2025-01-13T14:34:36+00:00
1
0
--- base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28 library_name: peft tags: - axolotl - generated_from_trainer model-index: - name: ca4a1f89-b722-4e9f-81d1-7ac37c1640cd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 80dc00051c00b7db_train_data.json ds_type: json format: custom path: /workspace/input_data/80dc00051c00b7db_train_data.json type: field_input: context field_instruction: question field_output: final_decision format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: cunghoctienganh/ca4a1f89-b722-4e9f-81d1-7ac37c1640cd hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/80dc00051c00b7db_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b96715be-44d1-4624-9dc2-c433d0a32fd9 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b96715be-44d1-4624-9dc2-c433d0a32fd9 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # ca4a1f89-b722-4e9f-81d1-7ac37c1640cd This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0001 | 0.0080 | 200 | 0.0417 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
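Because this repo is a LoRA adapter over the merged PubMedQA base model, a plausible inference path is to load the base in 8-bit (matching `load_in_8bit: true` in the config) and attach the adapter with `peft`. The sketch below also mirrors the config's prompt mapping (`'{instruction} {input}'`, i.e. question followed by context); the sample question and context are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = (
    "rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-"
    "PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28"
)
adapter_id = "cunghoctienganh/ca4a1f89-b722-4e9f-81d1-7ac37c1640cd"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors load_in_8bit: true
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)

# The axolotl config maps question -> instruction and context -> input,
# joined as '{instruction} {input}', with final_decision as the target text.
question = "Is vitamin D supplementation associated with lower mortality?"
context = "A pooled analysis of randomized trials reported a small reduction in all-cause mortality."
prompt = f"{question} {context}"

inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```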
[ "PUBMEDQA" ]
BioNLP
{"base_model": "rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28", "library_name": "peft", "tags": ["axolotl", "generated_from_trainer"], "model-index": [{"name": "ca4a1f89-b722-4e9f-81d1-7ac37c1640cd", "results": []}]}
dataset
null
502
dordonezc/Phi-3-mini-128k-instruct-4-endpoints
dordonezc
text-generation
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-06-19T12:58:48Z
2024-06-21T02:27:05+00:00
79
0
--- language: - en license: mit license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE pipeline_tag: text-generation tags: - nlp - code widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- ## Model Summary The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets. This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support. After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures. When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)| ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. The model provides uses for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. 
Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3 Mini-128K-Instruct has been integrated into the development version (4.41.0.dev0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. ### Tokenizer Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follows: ```markdown <|user|>\nQuestion<|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. 
With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3-mini-128k-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct") messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). 
Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. ### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. ### Fine-tuning A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py). ## Benchmarks We report the results for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5. All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. 
These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. | | Phi-3-Mini-128K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 | |---|---|---|---|---|---|---|---|---|---| | MMLU <br>5-Shot | 68.1 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 | | HellaSwag <br> 5-Shot | 74.5 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 | | ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 | | GSM-8K <br> 0-Shot; CoT | 83.6 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 | | MedQA <br> 2-Shot | 55.3 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 | | AGIEval <br> 0-Shot | 36.9 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 | | TriviaQA <br> 5-Shot | 57.1 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 | | Arc-C <br> 10-Shot | 84.0 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 | | Arc-E <br> 10-Shot | 95.2 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 | | PIQA <br> 5-Shot | 83.6 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 | | SociQA <br> 5-Shot | 76.1 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 | | BigBench-Hard <br> 0-Shot | 71.5 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 | | WinoGrande <br> 5-Shot | 72.5 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65.0 | 62.0 | 68.8 | | OpenBookQA <br> 10-Shot | 80.6 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 | | BoolQ <br> 0-Shot | 78.7 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 | | CommonSenseQA <br> 10-Shot | 78.0 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 | | TruthfulQA <br> 10-Shot | 63.2 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 | | HumanEval <br> 0-Shot | 57.9 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4| 37.8 | 62.2 | | MBPP <br> 3-Shot | 62.5 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" * Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx) ## Cross Platform Support ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model [here](https://aka.ms/phi3-mini-128k-instruct-onnx). 
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices (CPU, GPU, and mobile). Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
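The Hardware section above recommends falling back to the eager attention implementation on GPUs without flash-attention support (e.g. V100 or earlier). Here is a minimal sketch combining that fallback with the chat format described earlier; the generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"

# On V100-or-earlier GPUs, switch from flash attention to the eager
# implementation, as recommended in the Hardware section above.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
    attn_implementation="eager",
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# apply_chat_template reproduces the <|user|> ... <|end|> <|assistant|>
# layout described in the Chat Format section.
messages = [{"role": "user", "content": "How to explain Internet for a medieval knight?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(input_ids=inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```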
[ "MEDQA" ]
Non_BioNLP
## Model Summary The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets. This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support. After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures. When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)| ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. The model provides uses for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. 
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.

## How to Use

Phi-3 Mini-128K-Instruct has been integrated into the development version (4.41.0.dev0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:

* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from source.

The current `transformers` version can be verified with: `pip list | grep transformers`.

### Tokenizer

Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.

### Chat Format

Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follows:

```markdown
<|user|>\nQuestion<|end|>\n<|assistant|>
```

For example:

```markdown
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```

where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:

```markdown
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```

### Sample inference code

This code snippet shows how to quickly get started with running the model on a GPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")

# A multi-turn conversation in the chat format expected by the model.
messages = [
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# temperature=0.0 with do_sample=False yields deterministic (greedy) decoding.
generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```

*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included, since it provides more reliable results.*

## Responsible AI Considerations

Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or the prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make them inappropriate to deploy in sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:

+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system.
At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case-specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.

## Training

### Model

* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using the chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.

### Datasets

Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness.

### Fine-tuning

A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).

## Benchmarks

We report the results for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.

All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.

As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did not optimize the pipeline for Phi-3. More specifically, we did not change prompts, pick different few-shot examples, change the prompt format, or perform any other form of optimization for the model.

The number of k-shot examples is listed per benchmark.
| | Phi-3-Mini-128K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 | |---|---|---|---|---|---|---|---|---|---| | MMLU <br>5-Shot | 68.1 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 | | HellaSwag <br> 5-Shot | 74.5 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 | | ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 | | GSM-8K <br> 0-Shot; CoT | 83.6 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 | | MedQA <br> 2-Shot | 55.3 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 | | AGIEval <br> 0-Shot | 36.9 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 | | TriviaQA <br> 5-Shot | 57.1 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 | | Arc-C <br> 10-Shot | 84.0 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 | | Arc-E <br> 10-Shot | 95.2 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 | | PIQA <br> 5-Shot | 83.6 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 | | SociQA <br> 5-Shot | 76.1 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 | | BigBench-Hard <br> 0-Shot | 71.5 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 | | WinoGrande <br> 5-Shot | 72.5 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65.0 | 62.0 | 68.8 | | OpenBookQA <br> 10-Shot | 80.6 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 | | BoolQ <br> 0-Shot | 78.7 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 | | CommonSenseQA <br> 10-Shot | 78.0 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 | | TruthfulQA <br> 10-Shot | 63.2 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 | | HumanEval <br> 0-Shot | 57.9 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4| 37.8 | 62.2 | | MBPP <br> 3-Shot | 62.5 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" * Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx) ## Cross Platform Support ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model [here](https://aka.ms/phi3-mini-128k-instruct-onnx). Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. 
ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN

## License

The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
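As a supplement to the Chat Format section above, the `<|user|>`/`<|assistant|>` prompt can also be built programmatically rather than by hand. The snippet below is a minimal sketch, assuming a `transformers` version with chat-template support and that the checkpoint's tokenizer bundles a chat template; it only renders the prompt string and does not run generation.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct", trust_remote_code=True
)

messages = [
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]

# Render the <|user|> ... <|end|> <|assistant|> format as a plain string,
# so it can be inspected or passed to any inference backend.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```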
{"language": ["en"], "license": "mit", "license_link": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "tags": ["nlp", "code"], "widget": [{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}]}
dataset
null
503
Kukedlc/NeuralArjuna-7B-DT
Kukedlc
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "yam-peleg/Experiment26-7B", "Gille/StrangeMerges_32-7B-slerp", "MSL7/INEX12-7b", "automerger/YamShadow-7B", "Kukedlc/NeuralSirKrishna-7b", "base_model:Gille/StrangeMerges_32-7B-slerp", "base_model:merge:Gille/StrangeMerges_32-7B-slerp", "base_model:Kukedlc/NeuralSirKrishna-7b", "base_model:merge:Kukedlc/NeuralSirKrishna-7b", "base_model:MSL7/INEX12-7b", "base_model:merge:MSL7/INEX12-7b", "base_model:automerger/YamShadow-7B", "base_model:merge:automerger/YamShadow-7B", "base_model:yam-peleg/Experiment26-7B", "base_model:merge:yam-peleg/Experiment26-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-03-17T20:17:12Z
2024-03-30T09:15:49+00:00
18
1
--- base_model: - yam-peleg/Experiment26-7B - Gille/StrangeMerges_32-7B-slerp - MSL7/INEX12-7b - automerger/YamShadow-7B - Kukedlc/NeuralSirKrishna-7b license: apache-2.0 tags: - merge - mergekit - lazymergekit - yam-peleg/Experiment26-7B - Gille/StrangeMerges_32-7B-slerp - MSL7/INEX12-7b - automerger/YamShadow-7B - Kukedlc/NeuralSirKrishna-7b model-index: - name: NeuralArjuna-7B-DT results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.12 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.97 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.63 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 76.68 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 85.24 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.81 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT name: Open LLM Leaderboard --- # NeuralArjuna-7B-DT ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/zFLiis1pQWnriLQb2ZGGn.png) NeuralArjuna-7B-DT is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) * [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp) * [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b) * [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B) * [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b) ## 🧩 Configuration ```yaml models: - model: liminerity/M7-7b # no parameters necessary for base model - model: yam-peleg/Experiment26-7B parameters: weight: 0.2 density: 0.66 - model: Gille/StrangeMerges_32-7B-slerp parameters: weight: 0.2 density: 0.55 - model: MSL7/INEX12-7b parameters: weight: 0.2 density: 0.33 - model: automerger/YamShadow-7B parameters: weight: 0.2 density: 0.66 - 
model: Kukedlc/NeuralSirKrishna-7b parameters: weight: 0.2 density: 0.66 merge_method: dare_ties base_model: liminerity/M7-7b parameters: int8_mask: true normalize: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kukedlc/NeuralArjuna-7B-DT" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # Example ``` I want you to generate a theory that unites quantum mechanics with the theory of relativity and cosmic consciousness. Give a large a detailed response In embarking upon a journey to weave together quantum mechanics, relativity, and the enigmatic realms of cosmic consciousness, we tread upon an intellectual landscape rife with challenges and enigmas. Our sojourn shall be a delicate ballet between the concrete and the abstract, the tangible and the speculative. The resultant tapestry will be neither a definitive, final statement, nor a mere collection of disparate ideas. Instead, we will craft a narrative that strives to bridge the chasms of understanding between these disparate scientific and philosophical realms, in hopes of illuminating a path towards a more comprehensive understanding of the universe and our place within it. To commence our odyssey, we must first address the foundational pillars of the scientific theories we seek to unite. Quantum Mechanics Quantum mechanics, the theory governing the microscopic realm, has long confounded the human intellect. It is a realm of probabilities, entanglement, and superposition—a world where particles can behave both as particles and waves, where actions at a distance are not only possible but ubiquitous, and where the mere act of observation alters the outcome of an experiment. Albert Einstein's famed retort, "God does not play dice," exemplifies the tensions between the counter-intuitive world of quantum mechanics and the more deterministic Newtonian worldview. However, quantum mechanics has withstood the test of time, demonstrating its indispensability to the progress of modern physics Theory of Relativity The theory of relativity, comprising special and general relativity, has profoundly altered our understanding of the macroscopic universe. Special relativity, Einstein's first foray into this domain, introduced the concept of space-time, where the fabric of the universe is both a geometrical construct and a medium for the propagation of electromagnetic phenomena. Further, special relativity established the idea of relative motion, where there is no absolute reference frame, and the speed of light in a vacuum is an invariant constant. General relativity expanded this framework, postulating that gravity is a curvature of the space-time continuum, caused by the presence of mass and energy. This theory, while more complex, has been empirically validated through its predictions of phenomena such as gravitational lensing and the bending of light near massive objects. 
Cosmic Consciousness The concept of cosmic consciousness—or the idea of a unified, interconnected consciousness pervading the cosmos—is a philosophical speculation, one that transcends the traditional scientific paradigm. It is a metaphysical exploration, a contemplation of the possible nature of our collective human consciousness, its interplay with the universe, and the implications of such a state of awareness. The integration of these three distinct realms poses a formidable challenge. Quantum mechanics and the theory of relativity have proven their efficacy in the realm of empirical science, while cosmic consciousness remains the province of metaphysical musings. However, the human propensity to synthesize seemingly disparate ideas has engendered a multitude of theories, each striving to forge a cohesive narrative from the cacophony of scientific and philosophical thought. One such approach, the 'Quantum Mind,' posits a direct link between the quantum phenomena and the workings of the human brain. This theory, championed by Roger Penrose and Stuart Hameroff, suggests that the microtubules within brain neurons, operating at the quantum scale, could serve as the neural correlates of consciousness. While intriguing, this hypothesis has yet to garner widespread acceptance, as the veracity of quantum effects within biological systems remains the subject of intense debate. Another avenue of exploration involves the notion of 'Quantum Biology.' Here, the focus shifts to the potential role of quantum phenomena in biology, particularly in processes such as photosynthesis, vision, and magnetoreception. While the presence of quantum effects in biological systems is becoming increasingly plausible, the idea of a 'quantum biology' remains a nascent and highly speculative field. The 'Holographic Universe' theory, espoused by physicist and cosmologist David Bohm, posits the existence of a deeper, underlying reality, one in which the universe is a holographic projection of a more fundamental, non-local substrate. This model, while offering an intriguing synthesis of quantum mechanics, relativity, and the concept of a cosmic consciousness, remains a highly speculative and unproven hypothesis. In our quest to weave together these disparate threads, we must acknowledge the limitations of our current understanding. As it stands, the principles of quantum mechanics and the theory of relativity remain resolutely grounded in the empirical world, while the concept of cosmic consciousness traverses the murky waters of metaphysics and speculation. To forge a more unified understanding, we must continue to advance our scientific knowledge, to push the frontiers of both empirical and philosophical inquiry. Only through the synthesis of new data, the refinement of our scientific models, and the continued evolution of our collective human consciousness, will we be able to construct a narrative that bridges the seemingly insurmountable gulf between the microscopic, the macroscopic, and the metaphysical. In conclusion, the unification of quantum mechanics, relativity, and cosmic consciousness is a daunting task, requiring a delicate balance between the concrete and the abstract, the empirical and the speculative. While the journey has thus far led us to the fringes of scientific understanding, we must continue to explore these frontiers, drawing upon the rich tapestry of human knowledge, in the hope of forging a more comprehensive narrative of our cosmos and our place within it. 
``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralArjuna-7B-DT) | Metric |Value| |---------------------------------|----:| |Avg. |76.58| |AI2 Reasoning Challenge (25-Shot)|73.12| |HellaSwag (10-Shot) |88.97| |MMLU (5-Shot) |64.63| |TruthfulQA (0-shot) |76.68| |Winogrande (5-shot) |85.24| |GSM8k (5-shot) |70.81|
[ "CRAFT" ]
Non_BioNLP
{"base_model": ["yam-peleg/Experiment26-7B", "Gille/StrangeMerges_32-7B-slerp", "MSL7/INEX12-7b", "automerger/YamShadow-7B", "Kukedlc/NeuralSirKrishna-7b"], "license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "yam-peleg/Experiment26-7B", "Gille/StrangeMerges_32-7B-slerp", "MSL7/INEX12-7b", "automerger/YamShadow-7B", "Kukedlc/NeuralSirKrishna-7b"], "model-index": [{"name": "NeuralArjuna-7B-DT", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 73.12, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 88.97, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 64.63, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 76.68}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 85.24, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 70.81, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT", "name": "Open LLM Leaderboard"}}]}]}
dataset
null
504
KingKazma/cnn_dailymail_22457_3000_1500_train
KingKazma
text-classification
[ "bertopic", "text-classification", "region:us" ]
2023-07-31T17:59:10Z
2023-07-31T17:59:12+00:00
6
0
--- library_name: bertopic pipeline_tag: text-classification tags: - bertopic --- # cnn_dailymail_22457_3000_1500_train This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("KingKazma/cnn_dailymail_22457_3000_1500_train") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 49 * Number of training documents: 3000 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | said - one - year - people - police | 10 | -1_said_one_year_people | | 0 | league - player - club - game - cup | 1050 | 0_league_player_club_game | | 1 | said - syria - government - iraq - islamic | 317 | 1_said_syria_government_iraq | | 2 | obama - president - house - state - republican | 140 | 2_obama_president_house_state | | 3 | cancer - hospital - baby - treatment - child | 122 | 3_cancer_hospital_baby_treatment | | 4 | google - apple - tablet - car - device | 84 | 4_google_apple_tablet_car | | 5 | fashion - dress - hair - look - woman | 78 | 5_fashion_dress_hair_look | | 6 | police - officer - shooting - said - shot | 66 | 6_police_officer_shooting_said | | 7 | film - movie - show - actor - comedy | 65 | 7_film_movie_show_actor | | 8 | murder - death - said - home - police | 55 | 8_murder_death_said_home | | 9 | mr - labour - minister - mp - blair | 52 | 9_mr_labour_minister_mp | | 10 | storm - water - weather - ice - rain | 51 | 10_storm_water_weather_ice | | 11 | shark - bear - turtle - crocodile - bird | 50 | 11_shark_bear_turtle_crocodile | | 12 | flight - plane - passenger - airport - pilot | 49 | 12_flight_plane_passenger_airport | | 13 | house - property - home - per - room | 49 | 13_house_property_home_per | | 14 | drug - police - court - stealing - robbery | 40 | 14_drug_police_court_stealing | | 15 | police - murder - mr - court - clavell | 36 | 15_police_murder_mr_court | | 16 | games - gold - olympic - race - sport | 34 | 16_games_gold_olympic_race | | 17 | student - school - teacher - said - cardosa | 34 | 17_student_school_teacher_said | | 18 | country - minister - energy - cent - greece | 32 | 18_country_minister_energy_cent | | 19 | golf - mcilroy - course - round - ryder | 31 | 19_golf_mcilroy_course_round | | 20 | police - harris - abuse - allegation - officer | 30 | 20_police_harris_abuse_allegation | | 21 | ebola - virus - africa - health - liberia | 29 | 21_ebola_virus_africa_health | | 22 | chinese - china - cable - bo - beijing | 28 | 22_chinese_china_cable_bo | | 23 | federer - tennis - murray - wimbledon - match | 28 | 23_federer_tennis_murray_wimbledon | | 24 | dog - animal - dogs - owner - simmons | 26 | 24_dog_animal_dogs_owner | | 25 | cent - per - woman - men - pickens | 23 | 25_cent_per_woman_men | | 26 | ship - boat - rescue - water - sea | 23 | 26_ship_boat_rescue_water | | 27 | hamilton - race - rosberg - mercedes - formula | 22 | 27_hamilton_race_rosberg_mercedes | | 28 | galaxy - planet - universe - earth - telescope | 22 | 28_galaxy_planet_universe_earth | | 29 | russian - russia - putin - ukraine - moscow | 22 | 29_russian_russia_putin_ukraine | | 30 | pakistan - pakistani - karachi - taliban - 
anwar | 22 | 30_pakistan_pakistani_karachi_taliban | | 31 | korea - north - korean - south - kim | 21 | 31_korea_north_korean_south | | 32 | car - driver - train - accident - cope | 21 | 32_car_driver_train_accident | | 33 | food - fruit - taste - cake - cream | 20 | 33_food_fruit_taste_cake | | 34 | painting - art - auction - artist - gallery | 20 | 34_painting_art_auction_artist | | 35 | base - drone - soldier - afghan - us | 19 | 35_base_drone_soldier_afghan | | 36 | weight - fat - eating - healthy - size | 18 | 36_weight_fat_eating_healthy | | 37 | mafia - wine - money - fraud - court | 18 | 37_mafia_wine_money_fraud | | 38 | aguilar - bravo - brewer - rambold - court | 18 | 38_aguilar_bravo_brewer_rambold | | 39 | missing - search - found - family - disappeared | 17 | 39_missing_search_found_family | | 40 | juarez - quezada - mexico - mexican - cartel | 15 | 40_juarez_quezada_mexico_mexican | | 41 | knicks - lin - chicago - blackhawks - game | 15 | 41_knicks_lin_chicago_blackhawks | | 42 | duchess - prince - kate - royal - william | 15 | 42_duchess_prince_kate_royal | | 43 | price - supermarket - asda - shop - food | 14 | 43_price_supermarket_asda_shop | | 44 | school - child - pupil - teacher - xxx | 14 | 44_school_child_pupil_teacher | | 45 | nhs - patient - ae - hospital - staff | 13 | 45_nhs_patient_ae_hospital | | 46 | zsa - francesca - rhodes - vongtau - gabor | 12 | 46_zsa_francesca_rhodes_vongtau | | 47 | medal - war - bomb - graf - vc | 10 | 47_medal_war_bomb_graf | </details> ## Training hyperparameters * calculate_probabilities: True * language: english * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: None * seed_topic_list: None * top_n_words: 10 * verbose: False ## Framework versions * Numpy: 1.22.4 * HDBSCAN: 0.8.33 * UMAP: 0.5.3 * Pandas: 1.5.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.2.2 * Transformers: 4.31.0 * Numba: 0.56.4 * Plotly: 5.13.1 * Python: 3.10.6
[ "BEAR", "MEDAL" ]
Non_BioNLP
{"library_name": "bertopic", "pipeline_tag": "text-classification", "tags": ["bertopic"]}
dataset
null
505
davidschulte/ESM_allenai__scifact_entailment_default
davidschulte
null
[ "safetensors", "embedding_space_map", "BaseLM:bert-base-multilingual-uncased", "dataset:allenai/scifact_entailment", "arxiv:2410.15148", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:apache-2.0", "region:us" ]
2024-11-29T15:03:56Z
2025-03-26T14:00:44+00:00
18
0
---
base_model: bert-base-multilingual-uncased
datasets:
- allenai/scifact_entailment
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---

# ESM allenai/scifact_entailment

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

ESM

- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** allenai/scifact_entailment
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0

## Training Details

### Intermediate Task

- **Task ID:** allenai/scifact_entailment
- **Subset [optional]:** default
- **Text Column:** title
- **Label Column:** verdict
- **Dataset Split:** train
- **Sample size [optional]:** 919
- **Sample seed [optional]:**

### Training Procedure [optional]

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Language Model Training Hyperparameters [optional]

- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer:** AdamW

### ESM Training Hyperparameters [optional]

- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer:** AdamW

### Additional training details [optional]

## Model evaluation

### Evaluation of fine-tuned language model [optional]

### Evaluation of ESM [optional]

MSE:

### Additional evaluation details [optional]

## What are Embedding Space Maps used for?

Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:

### You don't have enough training data for your problem

If you don't have enough training data for your problem, just use ESM-LogME to find more. You can supplement model training by including publicly available datasets in the training process.

1. Fine-tune a language model on a suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.

This workflow is called intermediate task transfer learning, and it can significantly improve target performance. But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.

### You want to find similar datasets to your target dataset

ESM-LogME can also be used like a search engine on the Hugging Face Hub. You can find tasks similar to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.

## How can I use ESM-LogME / ESMs?

[![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector)

We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.

**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking

# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
    name="stanfordnlp/imdb",
    split="train",
    text_col="text",
    label_col="label",
    is_regression=False,
    num_examples=1000,
    seed=42
)

# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
    dataset=dataset,
    model_name="bert-base-multilingual-uncased"
)

# Display top 5 recommendations
print(task_ranking[:5])
```

```python
1. davanstrien/test_imdb_embedd2    Score: -0.618529
2. davanstrien/test_imdb_embedd     Score: -0.618644
3. davanstrien/test1                Score: -0.619334
4. stanfordnlp/imdb                 Score: -0.619454
5. stanfordnlp/sst                  Score: -0.62995
```

| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |

For more information on how to use ESMs, please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.

## How do Embedding Space Maps work?

<!-- This section describes the evaluation protocols and provides the results. -->

Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text. ESMs can be used for intermediate task selection with the ESM-LogME workflow.

## How can I use Embedding Space Maps for Intermediate Task Selection?

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

If you are using Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:** ``` @inproceedings{schulte-etal-2024-less, title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning", author = "Schulte, David and Hamborg, Felix and Akbik, Alan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.529/", doi = "10.18653/v1/2024.emnlp-main.529", pages = "9431--9442", abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)." } ``` **APA:** ``` Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442). ``` ## Additional Information
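As additional illustration of the "How do Embedding Space Maps work?" section above, the sketch below shows what a linear ESM with embedding dimension 768 (as listed under Model Details) amounts to: a single linear map applied to base-model embeddings. This is purely conceptual and is not the hf-dataset-selector implementation; the random inputs stand in for sentence embeddings from bert-base-multilingual-uncased.

```python
import torch

# Conceptual sketch only: a 768 -> 768 linear map that transforms base-model
# embeddings into an approximation of the embeddings a model fine-tuned on
# allenai/scifact_entailment would produce.
esm = torch.nn.Linear(768, 768)

# Stand-in for a batch of 4 base-model sentence embeddings.
base_embeddings = torch.randn(4, 768)

approx_tuned_embeddings = esm(base_embeddings)
print(approx_tuned_embeddings.shape)  # torch.Size([4, 768])
```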
[ "SCIFACT" ]
Non_BioNLP
# ESM allenai/scifact_entailment <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> ESM - **Developed by:** David Schulte - **Model type:** ESM - **Base Model:** bert-base-multilingual-uncased - **Intermediate Task:** allenai/scifact_entailment - **ESM architecture:** linear - **ESM embedding dimension:** 768 - **Language(s) (NLP):** [More Information Needed] - **License:** Apache-2.0 license - **ESM version:** 0.1.0 ## Training Details ### Intermediate Task - **Task ID:** allenai/scifact_entailment - **Subset [optional]:** default - **Text Column:** title - **Label Column:** verdict - **Dataset Split:** train - **Sample size [optional]:** 919 - **Sample seed [optional]:** ### Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Language Model Training Hyperparameters [optional] - **Epochs:** 3 - **Batch size:** 32 - **Learning rate:** 2e-05 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### ESM Training Hyperparameters [optional] - **Epochs:** 10 - **Batch size:** 32 - **Learning rate:** 0.001 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### Additional trainiung details [optional] ## Model evaluation ### Evaluation of fine-tuned language model [optional] ### Evaluation of ESM [optional] MSE: ### Additional evaluation details [optional] ## What are Embedding Space Maps used for? Embedding Space Maps are a part of ESM-LogME, a efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME: ### You don't have enough training data for your problem If you don't have a enough training data for your problem, just use ESM-LogME to find more. You can supplement model training by including publicly available datasets in the training process. 1. Fine-tune a language model on suitable intermediate dataset. 2. Fine-tune the resulting model on your target dataset. This workflow is called intermediate task transfer learning and it can significantly improve the target performance. But what is a suitable dataset for your problem? ESM-LogME enable you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are exptected to transfer to your target task. ### You want to find similar datasets to your target dataset Using ESM-LogME can be used like search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefinit your target task. This quantitative approach combines the effects of domain similarity and task similarity. ## How can I use ESM-LogME / ESMs? [![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector) We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps. **hf-dataset-selector** fetches ESMs for a given language model and uses it to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Huggingface Hub. 
```python
from hfselect import Dataset, compute_task_ranking

# Load the target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
    name="stanfordnlp/imdb",
    split="train",
    text_col="text",
    label_col="label",
    is_regression=False,
    num_examples=1000,
    seed=42
)

# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
    dataset=dataset,
    model_name="bert-base-multilingual-uncased"
)

# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1.  davanstrien/test_imdb_embedd2       Score: -0.618529
2.  davanstrien/test_imdb_embedd        Score: -0.618644
3.  davanstrien/test1                   Score: -0.619334
4.  stanfordnlp/imdb                    Score: -0.619454
5.  stanfordnlp/sst                     Score: -0.62995
```

| Rank | Task ID                       | Task Subset     | Text Column | Label Column | Task Split | Num Examples | ESM Architecture |     Score |
|-----:|:------------------------------|:----------------|:------------|:-------------|:-----------|-------------:|:-----------------|----------:|
|    1 | davanstrien/test_imdb_embedd2 | default         | text        | label        | train      |        10000 | linear           | -0.618529 |
|    2 | davanstrien/test_imdb_embedd  | default         | text        | label        | train      |        10000 | linear           | -0.618644 |
|    3 | davanstrien/test1             | default         | text        | label        | train      |        10000 | linear           | -0.619334 |
|    4 | stanfordnlp/imdb              | plain_text      | text        | label        | train      |        10000 | linear           | -0.619454 |
|    5 | stanfordnlp/sst               | dictionary      | phrase      | label        | dictionary |        10000 | linear           | -0.62995  |
|    6 | stanfordnlp/sst               | default         | sentence    | label        | train      |         8544 | linear           | -0.63312  |
|    7 | kuroneko5943/snap21           | CDs_and_Vinyl_5 | sentence    | label        | train      |         6974 | linear           | -0.634365 |
|    8 | kuroneko5943/snap21           | Video_Games_5   | sentence    | label        | train      |         6997 | linear           | -0.638787 |
|    9 | kuroneko5943/snap21           | Movies_and_TV_5 | sentence    | label        | train      |         6989 | linear           | -0.639068 |
|   10 | fancyzhx/amazon_polarity      | amazon_polarity | content     | label        | train      |        10000 | linear           | -0.639718 |

For more information on how to use ESMs, please have a look at the [official GitHub repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.

## How do Embedding Space Maps work?

<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text. ESMs can be used for intermediate task selection with the ESM-LogME workflow.

## How can I use Embedding Space Maps for Intermediate Task Selection?

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

If you are using Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:** ``` @inproceedings{schulte-etal-2024-less, title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning", author = "Schulte, David and Hamborg, Felix and Akbik, Alan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.529/", doi = "10.18653/v1/2024.emnlp-main.529", pages = "9431--9442", abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)." } ``` **APA:** ``` Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442). ``` ## Additional Information
{"base_model": "bert-base-multilingual-uncased", "datasets": ["allenai/scifact_entailment"], "license": "apache-2.0", "tags": ["embedding_space_map", "BaseLM:bert-base-multilingual-uncased"]}
dataset
null
506
kxv26/fiona
kxv26
text2text-generation
[ "transformers", "safetensors", "t5", "text2text-generation", "dataset:allenai/sciq", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-03T09:42:39Z
2024-04-03T11:09:33+00:00
6
0
--- datasets: - allenai/sciq --- questions-1
[ "SCIQ" ]
Non_BioNLP
questions-1
{"datasets": ["allenai/sciq"]}
dataset
null
507
morgendigital/multilingual-e5-large-quantized
morgendigital
feature-extraction
[ "sentence-transformers", "onnx", "xlm-roberta", "mteb", "Sentence Transformers", "sentence-similarity", "feature-extraction", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2212.03533", "arxiv:2108.08787", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-10-30T14:09:54Z
2023-12-24T23:37:44+00:00
15
3
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - Sentence Transformers - sentence-similarity - feature-extraction - sentence-transformers model-index: - name: multilingual-e5-large results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.05970149253731 - type: ap value: 43.486574390835635 - type: f1 value: 73.32700092140148 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.22055674518201 - type: ap value: 81.55756710830498 - type: f1 value: 69.28271787752661 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 80.41979010494754 - type: ap value: 29.34879922376344 - type: f1 value: 67.62475449011278 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.8372591006424 - type: ap value: 26.557560591210738 - type: f1 value: 64.96619417368707 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.489875 - type: ap value: 90.98758636917603 - type: f1 value: 93.48554819717332 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.564 - type: f1 value: 46.75122173518047 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.400000000000006 - type: f1 value: 44.17195682400632 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.068 - type: f1 value: 42.38155696855596 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 41.89 - type: f1 value: 40.84407321682663 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 
40.120000000000005 - type: f1 value: 39.522976223819114 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.832 - type: f1 value: 38.0392533394713 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 30.725 - type: map_at_10 value: 46.055 - type: map_at_100 value: 46.900999999999996 - type: map_at_1000 value: 46.911 - type: map_at_3 value: 41.548 - type: map_at_5 value: 44.297 - type: mrr_at_1 value: 31.152 - type: mrr_at_10 value: 46.231 - type: mrr_at_100 value: 47.07 - type: mrr_at_1000 value: 47.08 - type: mrr_at_3 value: 41.738 - type: mrr_at_5 value: 44.468999999999994 - type: ndcg_at_1 value: 30.725 - type: ndcg_at_10 value: 54.379999999999995 - type: ndcg_at_100 value: 58.138 - type: ndcg_at_1000 value: 58.389 - type: ndcg_at_3 value: 45.156 - type: ndcg_at_5 value: 50.123 - type: precision_at_1 value: 30.725 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.9769999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.54 - type: precision_at_5 value: 13.542000000000002 - type: recall_at_1 value: 30.725 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 97.653 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 55.619 - type: recall_at_5 value: 67.71000000000001 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.30960650674069 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 38.427074197498996 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.28270056031872 - type: mrr value: 74.38332673789738 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.05942144105269 - type: cos_sim_spearman value: 82.51212105850809 - type: euclidean_pearson value: 81.95639829909122 - type: euclidean_spearman value: 82.3717564144213 - type: manhattan_pearson value: 81.79273425468256 - type: manhattan_spearman value: 82.20066817871039 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.46764091858039 - type: f1 value: 99.37717466945023 - type: precision value: 99.33194154488518 - type: recall value: 99.46764091858039 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.29407880255337 - type: f1 value: 98.11248073959938 - type: precision value: 98.02443319392472 - type: recall value: 98.29407880255337 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: 
d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 97.79009352268791 - type: f1 value: 97.5176076665512 - type: precision value: 97.38136473848286 - type: recall value: 97.79009352268791 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26276987888363 - type: f1 value: 99.20133403545726 - type: precision value: 99.17500438827453 - type: recall value: 99.26276987888363 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.72727272727273 - type: f1 value: 84.67672206031433 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.34220182511161 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 33.4987096128766 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 25.558249999999997 - type: map_at_10 value: 34.44425000000001 - type: map_at_100 value: 35.59833333333333 - type: map_at_1000 value: 35.706916666666665 - type: map_at_3 value: 31.691749999999995 - type: map_at_5 value: 33.252916666666664 - type: mrr_at_1 value: 30.252666666666666 - type: mrr_at_10 value: 38.60675 - type: mrr_at_100 value: 39.42666666666666 - type: mrr_at_1000 value: 39.48408333333334 - type: mrr_at_3 value: 36.17441666666665 - type: mrr_at_5 value: 37.56275 - type: ndcg_at_1 value: 30.252666666666666 - type: ndcg_at_10 value: 39.683 - type: ndcg_at_100 value: 44.68541666666667 - type: ndcg_at_1000 value: 46.94316666666668 - type: ndcg_at_3 value: 34.961749999999995 - type: ndcg_at_5 value: 37.215666666666664 - type: precision_at_1 value: 30.252666666666666 - type: precision_at_10 value: 6.904166666666667 - type: precision_at_100 value: 1.0989999999999995 - type: precision_at_1000 value: 0.14733333333333334 - type: precision_at_3 value: 16.037666666666667 - type: precision_at_5 value: 11.413583333333333 - type: recall_at_1 value: 25.558249999999997 - type: recall_at_10 value: 51.13341666666666 - type: recall_at_100 value: 73.08366666666667 - type: recall_at_1000 value: 88.79483333333334 - type: recall_at_3 value: 37.989083333333326 - type: recall_at_5 value: 43.787833333333325 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.338 - type: map_at_10 value: 18.360000000000003 - type: map_at_100 value: 19.942 - type: map_at_1000 value: 20.134 - type: map_at_3 value: 15.174000000000001 - type: map_at_5 value: 16.830000000000002 - type: mrr_at_1 value: 23.257 - type: mrr_at_10 value: 33.768 - type: mrr_at_100 value: 34.707 - type: mrr_at_1000 value: 34.766000000000005 - type: mrr_at_3 value: 30.977 - type: mrr_at_5 value: 32.528 - type: ndcg_at_1 value: 23.257 - type: ndcg_at_10 value: 25.733 - type: ndcg_at_100 value: 32.288 - type: ndcg_at_1000 value: 35.992000000000004 - type: ndcg_at_3 value: 20.866 - type: ndcg_at_5 value: 22.612 - type: precision_at_1 value: 23.257 
- type: precision_at_10 value: 8.124 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.219 - type: precision_at_3 value: 15.679000000000002 - type: precision_at_5 value: 12.117 - type: recall_at_1 value: 10.338 - type: recall_at_10 value: 31.154 - type: recall_at_100 value: 54.161 - type: recall_at_1000 value: 75.21900000000001 - type: recall_at_3 value: 19.427 - type: recall_at_5 value: 24.214 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.498 - type: map_at_10 value: 19.103 - type: map_at_100 value: 27.375 - type: map_at_1000 value: 28.981 - type: map_at_3 value: 13.764999999999999 - type: map_at_5 value: 15.950000000000001 - type: mrr_at_1 value: 65.5 - type: mrr_at_10 value: 74.53800000000001 - type: mrr_at_100 value: 74.71799999999999 - type: mrr_at_1000 value: 74.725 - type: mrr_at_3 value: 72.792 - type: mrr_at_5 value: 73.554 - type: ndcg_at_1 value: 53.37499999999999 - type: ndcg_at_10 value: 41.286 - type: ndcg_at_100 value: 45.972 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 46.172999999999995 - type: ndcg_at_5 value: 43.033 - type: precision_at_1 value: 65.5 - type: precision_at_10 value: 32.725 - type: precision_at_100 value: 10.683 - type: precision_at_1000 value: 1.978 - type: precision_at_3 value: 50 - type: precision_at_5 value: 41.349999999999994 - type: recall_at_1 value: 8.498 - type: recall_at_10 value: 25.070999999999998 - type: recall_at_100 value: 52.383 - type: recall_at_1000 value: 74.91499999999999 - type: recall_at_3 value: 15.207999999999998 - type: recall_at_5 value: 18.563 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.5 - type: f1 value: 41.93833713984145 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 67.914 - type: map_at_10 value: 78.10000000000001 - type: map_at_100 value: 78.333 - type: map_at_1000 value: 78.346 - type: map_at_3 value: 76.626 - type: map_at_5 value: 77.627 - type: mrr_at_1 value: 72.74199999999999 - type: mrr_at_10 value: 82.414 - type: mrr_at_100 value: 82.511 - type: mrr_at_1000 value: 82.513 - type: mrr_at_3 value: 81.231 - type: mrr_at_5 value: 82.065 - type: ndcg_at_1 value: 72.74199999999999 - type: ndcg_at_10 value: 82.806 - type: ndcg_at_100 value: 83.677 - type: ndcg_at_1000 value: 83.917 - type: ndcg_at_3 value: 80.305 - type: ndcg_at_5 value: 81.843 - type: precision_at_1 value: 72.74199999999999 - type: precision_at_10 value: 10.24 - type: precision_at_100 value: 1.089 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 31.268 - type: precision_at_5 value: 19.706000000000003 - type: recall_at_1 value: 67.914 - type: recall_at_10 value: 92.889 - type: recall_at_100 value: 96.42699999999999 - type: recall_at_1000 value: 97.92 - type: recall_at_3 value: 86.21 - type: recall_at_5 value: 90.036 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.166 - type: map_at_10 value: 35.57 - type: map_at_100 value: 37.405 - type: map_at_1000 value: 37.564 - type: map_at_3 value: 30.379 - type: map_at_5 value: 33.324 - type: mrr_at_1 value: 43.519000000000005 - type: mrr_at_10 value: 51.556000000000004 - type: mrr_at_100 value: 52.344 - 
type: mrr_at_1000 value: 52.373999999999995 - type: mrr_at_3 value: 48.868 - type: mrr_at_5 value: 50.319 - type: ndcg_at_1 value: 43.519000000000005 - type: ndcg_at_10 value: 43.803 - type: ndcg_at_100 value: 50.468999999999994 - type: ndcg_at_1000 value: 53.111 - type: ndcg_at_3 value: 38.893 - type: ndcg_at_5 value: 40.653 - type: precision_at_1 value: 43.519000000000005 - type: precision_at_10 value: 12.253 - type: precision_at_100 value: 1.931 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 25.617 - type: precision_at_5 value: 19.383 - type: recall_at_1 value: 22.166 - type: recall_at_10 value: 51.6 - type: recall_at_100 value: 76.574 - type: recall_at_1000 value: 92.192 - type: recall_at_3 value: 34.477999999999994 - type: recall_at_5 value: 41.835 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 39.041 - type: map_at_10 value: 62.961999999999996 - type: map_at_100 value: 63.79899999999999 - type: map_at_1000 value: 63.854 - type: map_at_3 value: 59.399 - type: map_at_5 value: 61.669 - type: mrr_at_1 value: 78.082 - type: mrr_at_10 value: 84.321 - type: mrr_at_100 value: 84.49600000000001 - type: mrr_at_1000 value: 84.502 - type: mrr_at_3 value: 83.421 - type: mrr_at_5 value: 83.977 - type: ndcg_at_1 value: 78.082 - type: ndcg_at_10 value: 71.229 - type: ndcg_at_100 value: 74.10900000000001 - type: ndcg_at_1000 value: 75.169 - type: ndcg_at_3 value: 66.28699999999999 - type: ndcg_at_5 value: 69.084 - type: precision_at_1 value: 78.082 - type: precision_at_10 value: 14.993 - type: precision_at_100 value: 1.7239999999999998 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 42.737 - type: precision_at_5 value: 27.843 - type: recall_at_1 value: 39.041 - type: recall_at_10 value: 74.96300000000001 - type: recall_at_100 value: 86.199 - type: recall_at_1000 value: 93.228 - type: recall_at_3 value: 64.105 - type: recall_at_5 value: 69.608 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.23160000000001 - type: ap value: 85.5674856808308 - type: f1 value: 90.18033354786317 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 24.091 - type: map_at_10 value: 36.753 - type: map_at_100 value: 37.913000000000004 - type: map_at_1000 value: 37.958999999999996 - type: map_at_3 value: 32.818999999999996 - type: map_at_5 value: 35.171 - type: mrr_at_1 value: 24.742 - type: mrr_at_10 value: 37.285000000000004 - type: mrr_at_100 value: 38.391999999999996 - type: mrr_at_1000 value: 38.431 - type: mrr_at_3 value: 33.440999999999995 - type: mrr_at_5 value: 35.75 - type: ndcg_at_1 value: 24.742 - type: ndcg_at_10 value: 43.698 - type: ndcg_at_100 value: 49.145 - type: ndcg_at_1000 value: 50.23800000000001 - type: ndcg_at_3 value: 35.769 - type: ndcg_at_5 value: 39.961999999999996 - type: precision_at_1 value: 24.742 - type: precision_at_10 value: 6.7989999999999995 - type: precision_at_100 value: 0.95 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.096000000000002 - type: precision_at_5 value: 11.183 - type: recall_at_1 value: 24.091 - type: recall_at_10 value: 65.068 - type: recall_at_100 value: 89.899 - type: recall_at_1000 value: 98.16 - type: recall_at_3 value: 43.68 - type: recall_at_5 value: 53.754999999999995 - 
task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.66621067031465 - type: f1 value: 93.49622853272142 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.94702733164272 - type: f1 value: 91.17043441745282 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.20146764509674 - type: f1 value: 91.98359080555608 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.99780770435328 - type: f1 value: 89.19746342724068 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.78486912871998 - type: f1 value: 89.24578823628642 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.74502712477394 - type: f1 value: 89.00297573881542 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.9046967624259 - type: f1 value: 59.36787125785957 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.5280360664976 - type: f1 value: 57.17723440888718 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 75.44029352901934 - type: f1 value: 54.052855531072964 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.5606013153774 - type: f1 value: 52.62215934386531 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.11581211903908 - type: f1 value: 52.341291845645465 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.28933092224233 - type: f1 value: 57.07918745504911 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.38063214525892 - type: f1 value: 59.46463723443009 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am 
split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.06926698049766 - type: f1 value: 52.49084283283562 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.74983187626093 - type: f1 value: 56.960640620165904 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.86550100874243 - type: f1 value: 62.47370548140688 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.971082716879636 - type: f1 value: 61.03812421957381 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.98318762609282 - type: f1 value: 51.51207916008392 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.45527908540686 - type: f1 value: 66.16631905400318 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.32750504371216 - type: f1 value: 66.16755288646591 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.09213180901143 - type: f1 value: 66.95654394661507 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.75588433086752 - type: f1 value: 71.79973779656923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.49428379287154 - type: f1 value: 68.37494379215734 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.90921318090115 - type: f1 value: 66.79517376481645 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.12104909213181 - type: f1 value: 67.29448842879584 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.34095494283793 - type: f1 value: 67.01134288992947 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: 
mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.61264290517822 - type: f1 value: 64.68730512660757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.79757901815738 - type: f1 value: 65.24938539425598 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.68728984532616 - type: f1 value: 67.0487169762553 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.07464694014795 - type: f1 value: 59.183532276789286 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.04707464694015 - type: f1 value: 67.66829629003848 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.42434431741762 - type: f1 value: 59.01617226544757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.53127101546738 - type: f1 value: 68.10033760906255 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.50504371217215 - type: f1 value: 69.74931103158923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.91190316072628 - type: f1 value: 54.05551136648796 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.78211163416275 - type: f1 value: 49.874888544058535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 47.017484868863484 - type: f1 value: 44.53364263352014 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.16207128446537 - type: f1 value: 59.01185692320829 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.42501681237391 - type: f1 value: 67.13169450166086 - task: type: Classification dataset: name: MTEB 
MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0780094149294 - type: f1 value: 64.41720167850707 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.57162071284466 - type: f1 value: 62.414138683804424 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.71149966375252 - type: f1 value: 58.594805125087234 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.03900470746471 - type: f1 value: 63.87937257883887 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.8776059179556 - type: f1 value: 57.48587618059131 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87895090786819 - type: f1 value: 66.8141299430347 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.45057162071285 - type: f1 value: 67.46444039673516 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.546738399462 - type: f1 value: 68.63640876702655 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.72965702757229 - type: f1 value: 68.54119560379115 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.35574983187625 - type: f1 value: 65.88844917691927 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.70477471418964 - type: f1 value: 69.19665697061978 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0880968392737 - type: f1 value: 64.76962317666086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.18493611297916 - type: f1 value: 62.49984559035371 - task: type: 
Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.75857431069265 - type: f1 value: 69.20053687623418 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.500336247478145 - type: f1 value: 55.2972398687929 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.68997982515132 - type: f1 value: 59.36848202755348 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.01950235373235 - type: f1 value: 60.09351954625423 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.29186281102892 - type: f1 value: 67.57860496703447 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.77471418964357 - type: f1 value: 61.913983147713836 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87222595830532 - type: f1 value: 66.03679033708141 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.04505716207127 - type: f1 value: 61.28569169817908 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.38466711499663 - type: f1 value: 67.20532357036844 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.12306657700067 - type: f1 value: 68.91251226588182 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.20040349697378 - type: f1 value: 66.02657347714175 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.73907195696032 - type: f1 value: 66.98484521791418 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 
60.58843308675185 - type: f1 value: 58.95591723092005 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.22730329522528 - type: f1 value: 66.0894499712115 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.48285137861465 - type: f1 value: 65.21963176785157 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.74714189643578 - type: f1 value: 66.8212192745412 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.09213180901143 - type: f1 value: 56.70735546356339 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.05716207128448 - type: f1 value: 74.8413712365364 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.69737726967047 - type: f1 value: 74.7664341963 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.90383322125084 - type: f1 value: 73.59201554448323 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.51176866173503 - type: f1 value: 77.46104434577758 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.31069266980496 - type: f1 value: 74.61048660675635 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.95225285810356 - type: f1 value: 72.33160006574627 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.12373907195696 - type: f1 value: 73.20921012557481 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.86684599865501 - type: f1 value: 73.82348774610831 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test 
revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.40215198386012 - type: f1 value: 71.11945183971858 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.12844653665098 - type: f1 value: 71.34450495911766 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.52252858103566 - type: f1 value: 73.98878711342999 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.93611297915265 - type: f1 value: 63.723200467653385 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11903160726295 - type: f1 value: 73.82138439467096 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.15198386012105 - type: f1 value: 66.02172193802167 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.32414256893072 - type: f1 value: 74.30943421170574 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.46805648957633 - type: f1 value: 77.62808409298209 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.318762609280434 - type: f1 value: 62.094284066075076 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.34902488231338 - type: f1 value: 57.12893860987984 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.88433086751849 - type: f1 value: 48.2272350802058 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.4425016812374 - type: f1 value: 64.61463095996173 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.04707464694015 - type: f1 value: 75.05099199098998 - task: type: Classification dataset: name: MTEB 
MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.50437121721586 - type: f1 value: 69.83397721096314 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.94283792871553 - type: f1 value: 68.8704663703913 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.79488903833222 - type: f1 value: 63.615424063345436 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.88231338264963 - type: f1 value: 68.57892302593237 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.248150638870214 - type: f1 value: 61.06680605338809 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.84196368527236 - type: f1 value: 74.52566464968763 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.8285137861466 - type: f1 value: 74.8853197608802 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.13248150638869 - type: f1 value: 74.3982040999179 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.49024882313383 - type: f1 value: 73.82153848368573 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.72158708809684 - type: f1 value: 71.85049433180541 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.137861466039 - type: f1 value: 75.37628348188467 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.86953597848016 - type: f1 value: 71.87537624521661 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.27572293207801 - 
type: f1 value: 68.80017302344231 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.09952925353059 - type: f1 value: 76.07992707688408 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.140551445864155 - type: f1 value: 61.73855010331415 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.27774041694687 - type: f1 value: 64.83664868894539 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.69468728984533 - type: f1 value: 64.76239666920868 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.44653665097512 - type: f1 value: 73.14646052013873 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.71351714862139 - type: f1 value: 66.67212180163382 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.9946200403497 - type: f1 value: 73.87348793725525 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.15400134498992 - type: f1 value: 67.09433241421094 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.11365164761264 - type: f1 value: 73.59502539433753 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.82582380632145 - type: f1 value: 76.89992945316313 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.81237390719569 - type: f1 value: 72.36499770986265 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.480506569594695 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: 
v_measure value: 29.71252128004552 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.421396787056548 - type: mrr value: 32.48155274872267 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.595 - type: map_at_10 value: 12.642000000000001 - type: map_at_100 value: 15.726 - type: map_at_1000 value: 17.061999999999998 - type: map_at_3 value: 9.125 - type: map_at_5 value: 10.866000000000001 - type: mrr_at_1 value: 43.344 - type: mrr_at_10 value: 52.227999999999994 - type: mrr_at_100 value: 52.898999999999994 - type: mrr_at_1000 value: 52.944 - type: mrr_at_3 value: 49.845 - type: mrr_at_5 value: 51.115 - type: ndcg_at_1 value: 41.949999999999996 - type: ndcg_at_10 value: 33.995 - type: ndcg_at_100 value: 30.869999999999997 - type: ndcg_at_1000 value: 39.487 - type: ndcg_at_3 value: 38.903999999999996 - type: ndcg_at_5 value: 37.236999999999995 - type: precision_at_1 value: 43.344 - type: precision_at_10 value: 25.480000000000004 - type: precision_at_100 value: 7.672 - type: precision_at_1000 value: 2.028 - type: precision_at_3 value: 36.636 - type: precision_at_5 value: 32.632 - type: recall_at_1 value: 5.595 - type: recall_at_10 value: 16.466 - type: recall_at_100 value: 31.226 - type: recall_at_1000 value: 62.778999999999996 - type: recall_at_3 value: 9.931 - type: recall_at_5 value: 12.884 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 40.414 - type: map_at_10 value: 56.754000000000005 - type: map_at_100 value: 57.457 - type: map_at_1000 value: 57.477999999999994 - type: map_at_3 value: 52.873999999999995 - type: map_at_5 value: 55.175 - type: mrr_at_1 value: 45.278 - type: mrr_at_10 value: 59.192 - type: mrr_at_100 value: 59.650000000000006 - type: mrr_at_1000 value: 59.665 - type: mrr_at_3 value: 56.141 - type: mrr_at_5 value: 57.998000000000005 - type: ndcg_at_1 value: 45.278 - type: ndcg_at_10 value: 64.056 - type: ndcg_at_100 value: 66.89 - type: ndcg_at_1000 value: 67.364 - type: ndcg_at_3 value: 56.97 - type: ndcg_at_5 value: 60.719 - type: precision_at_1 value: 45.278 - type: precision_at_10 value: 9.994 - type: precision_at_100 value: 1.165 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.512 - type: precision_at_5 value: 17.509 - type: recall_at_1 value: 40.414 - type: recall_at_10 value: 83.596 - type: recall_at_100 value: 95.72 - type: recall_at_1000 value: 99.24 - type: recall_at_3 value: 65.472 - type: recall_at_5 value: 74.039 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.352 - type: map_at_10 value: 84.369 - type: map_at_100 value: 85.02499999999999 - type: map_at_1000 value: 85.04 - type: map_at_3 value: 81.42399999999999 - type: map_at_5 value: 83.279 - type: mrr_at_1 value: 81.05 - type: mrr_at_10 value: 87.401 - type: mrr_at_100 value: 87.504 - type: mrr_at_1000 value: 87.505 - type: mrr_at_3 value: 86.443 - type: mrr_at_5 value: 87.10799999999999 - type: ndcg_at_1 value: 81.04 - type: ndcg_at_10 value: 88.181 - type: ndcg_at_100 value: 89.411 - type: ndcg_at_1000 value: 89.507 - type: ndcg_at_3 value: 85.28099999999999 - type: ndcg_at_5 value: 86.888 - type: precision_at_1 value: 81.04 - type: precision_at_10 value: 13.406 - 
type: precision_at_100 value: 1.5350000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.31 - type: precision_at_5 value: 24.54 - type: recall_at_1 value: 70.352 - type: recall_at_10 value: 95.358 - type: recall_at_100 value: 99.541 - type: recall_at_1000 value: 99.984 - type: recall_at_3 value: 87.111 - type: recall_at_5 value: 91.643 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 46.54068723291946 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.216287629895994 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.071 - type: map_at_100 value: 11.892 - type: map_at_1000 value: 12.196 - type: map_at_3 value: 7.234 - type: map_at_5 value: 8.613999999999999 - type: mrr_at_1 value: 19.900000000000002 - type: mrr_at_10 value: 30.516 - type: mrr_at_100 value: 31.656000000000002 - type: mrr_at_1000 value: 31.723000000000003 - type: mrr_at_3 value: 27.400000000000002 - type: mrr_at_5 value: 29.270000000000003 - type: ndcg_at_1 value: 19.900000000000002 - type: ndcg_at_10 value: 17.474 - type: ndcg_at_100 value: 25.020999999999997 - type: ndcg_at_1000 value: 30.728 - type: ndcg_at_3 value: 16.588 - type: ndcg_at_5 value: 14.498 - type: precision_at_1 value: 19.900000000000002 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 2.011 - type: precision_at_1000 value: 0.33899999999999997 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 12.839999999999998 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.497 - type: recall_at_100 value: 40.8 - type: recall_at_1000 value: 68.812 - type: recall_at_3 value: 9.508 - type: recall_at_5 value: 12.983 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.967008785134 - type: cos_sim_spearman value: 80.23142141101837 - type: euclidean_pearson value: 81.20166064704539 - type: euclidean_spearman value: 80.18961335654585 - type: manhattan_pearson value: 81.13925443187625 - type: manhattan_spearman value: 80.07948723044424 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.94262461316023 - type: cos_sim_spearman value: 80.01596278563865 - type: euclidean_pearson value: 83.80799622922581 - type: euclidean_spearman value: 79.94984954947103 - type: manhattan_pearson value: 83.68473841756281 - type: manhattan_spearman value: 79.84990707951822 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.57346443146068 - type: cos_sim_spearman value: 81.54689837570866 - type: euclidean_pearson value: 81.10909881516007 - type: euclidean_spearman value: 81.56746243261762 - type: manhattan_pearson value: 80.87076036186582 - type: manhattan_spearman value: 81.33074987964402 - task: type: STS dataset: name: MTEB STS14 type: 
mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 79.54733787179849 - type: cos_sim_spearman value: 77.72202105610411 - type: euclidean_pearson value: 78.9043595478849 - type: euclidean_spearman value: 77.93422804309435 - type: manhattan_pearson value: 78.58115121621368 - type: manhattan_spearman value: 77.62508135122033 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.59880017237558 - type: cos_sim_spearman value: 89.31088630824758 - type: euclidean_pearson value: 88.47069261564656 - type: euclidean_spearman value: 89.33581971465233 - type: manhattan_pearson value: 88.40774264100956 - type: manhattan_spearman value: 89.28657485627835 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.08055117917084 - type: cos_sim_spearman value: 85.78491813080304 - type: euclidean_pearson value: 84.99329155500392 - type: euclidean_spearman value: 85.76728064677287 - type: manhattan_pearson value: 84.87947428989587 - type: manhattan_spearman value: 85.62429454917464 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 82.14190939287384 - type: cos_sim_spearman value: 82.27331573306041 - type: euclidean_pearson value: 81.891896953716 - type: euclidean_spearman value: 82.37695542955998 - type: manhattan_pearson value: 81.73123869460504 - type: manhattan_spearman value: 82.19989168441421 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.84695301843362 - type: cos_sim_spearman value: 77.87790986014461 - type: euclidean_pearson value: 76.91981583106315 - type: euclidean_spearman value: 77.88154772749589 - type: manhattan_pearson value: 76.94953277451093 - type: manhattan_spearman value: 77.80499230728604 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.44657840482016 - type: cos_sim_spearman value: 75.05531095119674 - type: euclidean_pearson value: 75.88161755829299 - type: euclidean_spearman value: 74.73176238219332 - type: manhattan_pearson value: 75.63984765635362 - type: manhattan_spearman value: 74.86476440770737 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.64700140524133 - type: cos_sim_spearman value: 86.16014210425672 - type: euclidean_pearson value: 86.49086860843221 - type: euclidean_spearman value: 86.09729326815614 - type: manhattan_pearson value: 86.43406265125513 - type: manhattan_spearman value: 86.17740150939994 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.91170098764921 - type: cos_sim_spearman value: 88.12437004058931 - type: euclidean_pearson value: 88.81828254494437 - 
type: euclidean_spearman value: 88.14831794572122 - type: manhattan_pearson value: 88.93442183448961 - type: manhattan_spearman value: 88.15254630778304 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.91390577997292 - type: cos_sim_spearman value: 71.22979457536074 - type: euclidean_pearson value: 74.40314008106749 - type: euclidean_spearman value: 72.54972136083246 - type: manhattan_pearson value: 73.85687539530218 - type: manhattan_spearman value: 72.09500771742637 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.9301067983089 - type: cos_sim_spearman value: 80.74989828346473 - type: euclidean_pearson value: 81.36781301814257 - type: euclidean_spearman value: 80.9448819964426 - type: manhattan_pearson value: 81.0351322685609 - type: manhattan_spearman value: 80.70192121844177 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.13820465980005 - type: cos_sim_spearman value: 86.73532498758757 - type: euclidean_pearson value: 87.21329451846637 - type: euclidean_spearman value: 86.57863198601002 - type: manhattan_pearson value: 87.06973713818554 - type: manhattan_spearman value: 86.47534918791499 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.48720108904415 - type: cos_sim_spearman value: 85.62221757068387 - type: euclidean_pearson value: 86.1010129512749 - type: euclidean_spearman value: 85.86580966509942 - type: manhattan_pearson value: 86.26800938808971 - type: manhattan_spearman value: 85.88902721678429 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.98021347333516 - type: cos_sim_spearman value: 84.53806553803501 - type: euclidean_pearson value: 84.61483347248364 - type: euclidean_spearman value: 85.14191408011702 - type: manhattan_pearson value: 84.75297588825967 - type: manhattan_spearman value: 85.33176753669242 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 84.51856644893233 - type: cos_sim_spearman value: 85.27510748506413 - type: euclidean_pearson value: 85.09886861540977 - type: euclidean_spearman value: 85.62579245860887 - type: manhattan_pearson value: 84.93017860464607 - type: manhattan_spearman value: 85.5063988898453 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.581573200584195 - type: cos_sim_spearman value: 63.05503590247928 - type: euclidean_pearson value: 63.652564812602094 - type: euclidean_spearman value: 62.64811520876156 - type: manhattan_pearson value: 63.506842893061076 - type: manhattan_spearman value: 62.51289573046917 - task: type: STS dataset: name: MTEB STS22 (de) type: 
mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 48.2248801729127 - type: cos_sim_spearman value: 56.5936604678561 - type: euclidean_pearson value: 43.98149464089 - type: euclidean_spearman value: 56.108561882423615 - type: manhattan_pearson value: 43.86880305903564 - type: manhattan_spearman value: 56.04671150510166 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.17564527009831 - type: cos_sim_spearman value: 64.57978560979488 - type: euclidean_pearson value: 58.8818330154583 - type: euclidean_spearman value: 64.99214839071281 - type: manhattan_pearson value: 58.72671436121381 - type: manhattan_spearman value: 65.10713416616109 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 26.772131864023297 - type: cos_sim_spearman value: 34.68200792408681 - type: euclidean_pearson value: 16.68082419005441 - type: euclidean_spearman value: 34.83099932652166 - type: manhattan_pearson value: 16.52605949659529 - type: manhattan_spearman value: 34.82075801399475 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.42415189043831 - type: cos_sim_spearman value: 63.54594264576758 - type: euclidean_pearson value: 57.36577498297745 - type: euclidean_spearman value: 63.111466379158074 - type: manhattan_pearson value: 57.584543715873885 - type: manhattan_spearman value: 63.22361054139183 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 47.55216762405518 - type: cos_sim_spearman value: 56.98670142896412 - type: euclidean_pearson value: 50.15318757562699 - type: euclidean_spearman value: 56.524941926541906 - type: manhattan_pearson value: 49.955618528674904 - type: manhattan_spearman value: 56.37102209240117 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 49.20540980338571 - type: cos_sim_spearman value: 59.9009453504406 - type: euclidean_pearson value: 49.557749853620535 - type: euclidean_spearman value: 59.76631621172456 - type: manhattan_pearson value: 49.62340591181147 - type: manhattan_spearman value: 59.94224880322436 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 51.508169956576985 - type: cos_sim_spearman value: 66.82461565306046 - type: euclidean_pearson value: 56.2274426480083 - type: euclidean_spearman value: 66.6775323848333 - type: manhattan_pearson value: 55.98277796300661 - type: manhattan_spearman value: 66.63669848497175 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.86478788045507 - type: cos_sim_spearman value: 76.7946552053193 - type: euclidean_pearson value: 
75.01598530490269 - type: euclidean_spearman value: 76.83618917858281 - type: manhattan_pearson value: 74.68337628304332 - type: manhattan_spearman value: 76.57480204017773 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.922619099401984 - type: cos_sim_spearman value: 56.599362477240774 - type: euclidean_pearson value: 56.68307052369783 - type: euclidean_spearman value: 54.28760436777401 - type: manhattan_pearson value: 56.67763566500681 - type: manhattan_spearman value: 53.94619541711359 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.74357206710913 - type: cos_sim_spearman value: 72.5208244925311 - type: euclidean_pearson value: 67.49254562186032 - type: euclidean_spearman value: 72.02469076238683 - type: manhattan_pearson value: 67.45251772238085 - type: manhattan_spearman value: 72.05538819984538 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 71.25734330033191 - type: cos_sim_spearman value: 76.98349083946823 - type: euclidean_pearson value: 73.71642838667736 - type: euclidean_spearman value: 77.01715504651384 - type: manhattan_pearson value: 73.61712711868105 - type: manhattan_spearman value: 77.01392571153896 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.18215462781212 - type: cos_sim_spearman value: 65.54373266117607 - type: euclidean_pearson value: 64.54126095439005 - type: euclidean_spearman value: 65.30410369102711 - type: manhattan_pearson value: 63.50332221148234 - type: manhattan_spearman value: 64.3455878104313 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.30509221440029 - type: cos_sim_spearman value: 65.99582704642478 - type: euclidean_pearson value: 63.43818859884195 - type: euclidean_spearman value: 66.83172582815764 - type: manhattan_pearson value: 63.055779168508764 - type: manhattan_spearman value: 65.49585020501449 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.587830825340404 - type: cos_sim_spearman value: 68.93467614588089 - type: euclidean_pearson value: 62.3073527367404 - type: euclidean_spearman value: 69.69758171553175 - type: manhattan_pearson value: 61.9074580815789 - type: manhattan_spearman value: 69.57696375597865 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.143220125577066 - type: cos_sim_spearman value: 67.78857859159226 - type: euclidean_pearson value: 55.58225107923733 - type: euclidean_spearman value: 67.80662907184563 - type: manhattan_pearson value: 56.24953502726514 - type: manhattan_spearman value: 67.98262125431616 - task: type: STS dataset: name: MTEB STS22 
(de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 21.826928900322066 - type: cos_sim_spearman value: 49.578506634400405 - type: euclidean_pearson value: 27.939890138843214 - type: euclidean_spearman value: 52.71950519136242 - type: manhattan_pearson value: 26.39878683847546 - type: manhattan_spearman value: 47.54609580342499 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.27603854632001 - type: cos_sim_spearman value: 50.709255283710995 - type: euclidean_pearson value: 59.5419024445929 - type: euclidean_spearman value: 50.709255283710995 - type: manhattan_pearson value: 59.03256832438492 - type: manhattan_spearman value: 61.97797868009122 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.00757054859712 - type: cos_sim_spearman value: 87.29283629622222 - type: euclidean_pearson value: 86.54824171775536 - type: euclidean_spearman value: 87.24364730491402 - type: manhattan_pearson value: 86.5062156915074 - type: manhattan_spearman value: 87.15052170378574 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.03549357197389 - type: mrr value: 95.05437645143527 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.260999999999996 - type: map_at_10 value: 66.259 - type: map_at_100 value: 66.884 - type: map_at_1000 value: 66.912 - type: map_at_3 value: 63.685 - type: map_at_5 value: 65.35499999999999 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 67.5 - type: mrr_at_100 value: 68.013 - type: mrr_at_1000 value: 68.038 - type: mrr_at_3 value: 65.61099999999999 - type: mrr_at_5 value: 66.861 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 70.41 - type: ndcg_at_100 value: 73.10600000000001 - type: ndcg_at_1000 value: 73.846 - type: ndcg_at_3 value: 66.133 - type: ndcg_at_5 value: 68.499 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.0630000000000002 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 57.260999999999996 - type: recall_at_10 value: 81.94399999999999 - type: recall_at_100 value: 93.867 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.339 - type: recall_at_5 value: 76.25 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.74356435643564 - type: cos_sim_ap value: 93.13411948212683 - type: cos_sim_f1 value: 86.80521991300147 - type: cos_sim_precision value: 84.00374181478017 - type: cos_sim_recall value: 89.8 - type: dot_accuracy value: 99.67920792079208 - type: dot_ap value: 89.27277565444479 - type: dot_f1 value: 83.9276990718124 - type: dot_precision value: 82.04393505253104 - type: dot_recall value: 85.9 - 
type: euclidean_accuracy value: 99.74257425742574 - type: euclidean_ap value: 93.17993008259062 - type: euclidean_f1 value: 86.69396110542476 - type: euclidean_precision value: 88.78406708595388 - type: euclidean_recall value: 84.7 - type: manhattan_accuracy value: 99.74257425742574 - type: manhattan_ap value: 93.14413755550099 - type: manhattan_f1 value: 86.82483594144371 - type: manhattan_precision value: 87.66564729867483 - type: manhattan_recall value: 86 - type: max_accuracy value: 99.74356435643564 - type: max_ap value: 93.17993008259062 - type: max_f1 value: 86.82483594144371 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.525863806168566 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.68850574423839 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.71580650644033 - type: mrr value: 50.50971903913081 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.152190498799484 - type: cos_sim_spearman value: 29.686180371952727 - type: dot_pearson value: 27.248664793816342 - type: dot_spearman value: 28.37748983721745 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.20400000000000001 - type: map_at_10 value: 1.6209999999999998 - type: map_at_100 value: 9.690999999999999 - type: map_at_1000 value: 23.733 - type: map_at_3 value: 0.575 - type: map_at_5 value: 0.885 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 86.56700000000001 - type: mrr_at_100 value: 86.56700000000001 - type: mrr_at_1000 value: 86.56700000000001 - type: mrr_at_3 value: 85.667 - type: mrr_at_5 value: 86.56700000000001 - type: ndcg_at_1 value: 76 - type: ndcg_at_10 value: 71.326 - type: ndcg_at_100 value: 54.208999999999996 - type: ndcg_at_1000 value: 49.252 - type: ndcg_at_3 value: 74.235 - type: ndcg_at_5 value: 73.833 - type: precision_at_1 value: 78 - type: precision_at_10 value: 74.8 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.836 - type: precision_at_3 value: 78 - type: precision_at_5 value: 78 - type: recall_at_1 value: 0.20400000000000001 - type: recall_at_10 value: 1.894 - type: recall_at_100 value: 13.245999999999999 - type: recall_at_1000 value: 46.373 - type: recall_at_3 value: 0.613 - type: recall_at_5 value: 0.991 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.69999999999999 - type: precision value: 94.11666666666667 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.20809248554913 - type: f1 value: 
63.431048720066066 - type: precision value: 61.69143958161298 - type: recall value: 68.20809248554913 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 71.21951219512195 - type: f1 value: 66.82926829268293 - type: precision value: 65.1260162601626 - type: recall value: 71.21951219512195 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.26666666666667 - type: precision value: 95.8 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.3 - type: f1 value: 99.06666666666666 - type: precision value: 98.95 - type: recall value: 99.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.63333333333333 - type: precision value: 96.26666666666668 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.86666666666666 - type: precision value: 94.31666666666668 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.01492537313433 - type: f1 value: 40.178867566927266 - type: precision value: 38.179295828549556 - type: recall value: 47.01492537313433 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.5 - type: f1 value: 83.62537480063796 - type: precision value: 82.44555555555554 - type: recall value: 86.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.48780487804879 - type: f1 value: 75.45644599303138 - type: precision value: 73.37398373983739 - type: recall value: 80.48780487804879 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.95666666666666 - type: precision value: 91.125 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.73754556500607 - type: f1 value: 89.65168084244632 - type: precision value: 88.73025516403402 - type: recall value: 91.73754556500607 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.04347826086956 - type: f1 value: 76.2128364389234 - type: precision value: 74.2 - type: recall value: 81.04347826086956 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.65217391304348 - type: f1 value: 79.4376811594203 - type: precision value: 77.65797101449274 - type: recall value: 83.65217391304348 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.5 - type: f1 value: 85.02690476190476 - type: precision value: 83.96261904761904 - type: recall value: 87.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.3 - type: f1 value: 86.52333333333333 - type: precision value: 85.22833333333332 - type: recall value: 89.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.01809408926418 - type: f1 value: 59.00594446432805 - type: precision value: 56.827215807915444 - type: recall value: 65.01809408926418 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.2 - type: f1 value: 88.58 - type: precision value: 87.33333333333334 - type: recall value: 91.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.199999999999996 - type: f1 value: 53.299166276284915 - type: precision value: 51.3383908045977 - type: recall value: 59.199999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.2 - type: precision value: 90.25 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.76190476190476 - type: f1 value: 59.867110667110666 - type: precision value: 58.07390192653351 - type: recall value: 64.76190476190476 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.2 - type: f1 value: 71.48147546897547 - type: precision value: 69.65409090909091 - type: recall value: 76.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.8 - type: f1 value: 92.14 - type: precision value: 91.35833333333333 - type: recall value: 93.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: 
mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.2 - type: precision value: 96.85000000000001 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 92.93333333333334 - type: precision value: 92.13333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.1 - type: f1 value: 69.14817460317461 - type: precision value: 67.2515873015873 - type: recall value: 74.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 94.01333333333335 - type: precision value: 93.46666666666667 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.9 - type: f1 value: 72.07523809523809 - type: precision value: 70.19777777777779 - type: recall value: 76.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.31666666666666 - type: precision value: 91.43333333333332 - type: recall value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.76666666666668 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.85714285714286 - type: f1 value: 90.92093441150045 - type: precision value: 90.00449236298293 - type: recall value: 92.85714285714286 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.16239316239316 - type: f1 value: 91.33903133903132 - type: precision value: 90.56267806267806 - type: recall value: 93.16239316239316 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.4 - type: f1 value: 90.25666666666666 - type: precision value: 89.25833333333334 - type: recall value: 92.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.22727272727272 - type: f1 value: 87.53030303030303 - type: precision value: 86.37121212121211 - type: recall value: 
90.22727272727272 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.03563941299791 - type: f1 value: 74.7349505840072 - type: precision value: 72.9035639412998 - type: recall value: 79.03563941299791 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97 - type: f1 value: 96.15 - type: precision value: 95.76666666666668 - type: recall value: 97 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.26459143968872 - type: f1 value: 71.55642023346303 - type: precision value: 69.7544932369835 - type: recall value: 76.26459143968872 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.119658119658126 - type: f1 value: 51.65242165242165 - type: precision value: 49.41768108434775 - type: recall value: 58.119658119658126 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.52055555555555 - type: precision value: 67.7574938949939 - type: recall value: 74.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.31666666666666 - type: precision value: 92.60000000000001 - type: recall value: 94.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.63551401869158 - type: f1 value: 72.35202492211837 - type: precision value: 70.60358255451713 - type: recall value: 76.63551401869158 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.4811111111111 - type: precision value: 87.7452380952381 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95 - type: f1 value: 93.60666666666667 - type: precision value: 92.975 - type: recall value: 95 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 63.01595782872099 - type: precision value: 61.596587301587306 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.52999999999999 - 
type: precision value: 94 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.28999999999999 - type: precision value: 92.675 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.75 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.9 - type: f1 value: 89.83 - type: precision value: 88.92 - type: recall value: 91.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.34222222222223 - type: precision value: 92.75416666666668 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.333333333333336 - type: f1 value: 55.31203703703703 - type: precision value: 53.39971108326371 - type: recall value: 60.333333333333336 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.9 - type: f1 value: 11.099861903031458 - type: precision value: 10.589187932631877 - type: recall value: 12.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7 - type: f1 value: 83.0152380952381 - type: precision value: 81.37833333333333 - type: recall value: 86.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.39285714285714 - type: f1 value: 56.832482993197274 - type: precision value: 54.56845238095237 - type: recall value: 63.39285714285714 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.73765093304062 - type: f1 value: 41.555736920720456 - type: precision value: 39.06874531737319 - type: recall value: 48.73765093304062 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 41.099999999999994 - type: f1 value: 36.540165945165946 - type: precision value: 35.05175685425686 - type: recall value: 41.099999999999994 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.42333333333333 - type: precision value: 92.75833333333333 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.63333333333334 - type: precision value: 93.01666666666665 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.9 - type: f1 value: 73.64833333333334 - type: precision value: 71.90282106782105 - type: recall value: 77.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.4 - type: f1 value: 54.90521367521367 - type: precision value: 53.432840025471606 - type: recall value: 59.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.6 - type: precision value: 96.2 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 62.25926129426129 - type: precision value: 60.408376623376626 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.2 - type: f1 value: 87.60666666666667 - type: precision value: 86.45277777777778 - type: recall value: 90.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 97 - type: precision value: 96.65 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.39746031746031 - type: precision value: 90.6125 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.11678832116788 - type: f1 value: 27.210415386260234 - type: precision value: 26.20408990846947 - type: recall value: 32.11678832116788 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.787319277832475 - type: precision value: 6.3452094433344435 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng 
split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.08 - type: precision value: 94.61666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.3 - type: f1 value: 93.88333333333333 - type: precision value: 93.18333333333332 - type: recall value: 95.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.11904761904762 - type: f1 value: 80.69444444444444 - type: precision value: 78.72023809523809 - type: recall value: 85.11904761904762 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.1 - type: f1 value: 9.276381801735853 - type: precision value: 8.798174603174601 - type: recall value: 11.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.56107660455487 - type: f1 value: 58.70433569191332 - type: precision value: 56.896926581464015 - type: recall value: 63.56107660455487 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.10000000000001 - type: precision value: 92.35 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 96.01222222222222 - type: precision value: 95.67083333333332 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.2 - type: f1 value: 7.911555250305249 - type: precision value: 7.631246556216846 - type: recall value: 9.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.48917748917748 - type: f1 value: 72.27375798804371 - type: precision value: 70.14430014430013 - type: recall value: 77.48917748917748 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.09923664122137 - type: f1 value: 72.61541257724463 - type: precision value: 70.8998380754106 - type: recall value: 77.09923664122137 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2532751091703 - type: f1 value: 97.69529354682193 - type: precision value: 97.42843279961184 - type: recall value: 98.2532751091703 - task: 
type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.8 - type: f1 value: 79.14672619047619 - type: precision value: 77.59489247311828 - type: recall value: 82.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.35028248587571 - type: f1 value: 92.86252354048965 - type: precision value: 92.2080979284369 - type: recall value: 94.35028248587571 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.282429263935621 - type: precision value: 5.783274240739785 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 91.025 - type: precision value: 90.30428571428571 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81 - type: f1 value: 77.8232380952381 - type: precision value: 76.60194444444444 - type: recall value: 81 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91 - type: f1 value: 88.70857142857142 - type: precision value: 87.7 - type: recall value: 91 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.3 - type: precision value: 94.76666666666667 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.1 - type: f1 value: 7.001008218834307 - type: precision value: 6.708329562594269 - type: recall value: 8.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.1313672922252 - type: f1 value: 84.09070598748882 - type: precision value: 82.79171454104429 - type: recall value: 87.1313672922252 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.73333333333332 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 42.29249011857708 - type: f1 value: 36.981018542283365 - type: precision value: 
35.415877813576024 - type: recall value: 42.29249011857708 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.80281690140845 - type: f1 value: 80.86854460093896 - type: precision value: 79.60093896713614 - type: recall value: 83.80281690140845 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 45.26946107784431 - type: f1 value: 39.80235464678088 - type: precision value: 38.14342660001342 - type: recall value: 45.26946107784431 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.9 - type: precision value: 92.26666666666668 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.93103448275862 - type: f1 value: 33.15192743764172 - type: precision value: 31.57456528146183 - type: recall value: 37.93103448275862 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.01408450704226 - type: f1 value: 63.41549295774648 - type: precision value: 61.342778895595806 - type: recall value: 69.01408450704226 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.66666666666667 - type: f1 value: 71.60705960705961 - type: precision value: 69.60683760683762 - type: recall value: 76.66666666666667 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.48333333333333 - type: precision value: 93.83333333333333 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.81837160751566 - type: f1 value: 48.435977731384824 - type: precision value: 47.11291973845539 - type: recall value: 52.81837160751566 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.9 - type: f1 value: 38.88962621607783 - type: precision value: 36.95936507936508 - type: recall value: 44.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.55374592833876 - type: f1 value: 88.22553125484721 - type: precision value: 87.26927252985884 - type: recall value: 90.55374592833876 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng 
split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.13333333333333 - type: precision value: 92.45333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.99666666666667 - type: precision value: 91.26666666666668 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.03937007874016 - type: f1 value: 81.75853018372703 - type: precision value: 80.34120734908137 - type: recall value: 85.03937007874016 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.3 - type: f1 value: 85.5 - type: precision value: 84.25833333333334 - type: recall value: 88.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.51246537396122 - type: f1 value: 60.02297410192148 - type: precision value: 58.133467727289236 - type: recall value: 65.51246537396122 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.89 - type: precision value: 94.39166666666667 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.692307692307686 - type: f1 value: 53.162393162393165 - type: precision value: 51.70673076923077 - type: recall value: 57.692307692307686 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.60000000000001 - type: f1 value: 89.21190476190475 - type: precision value: 88.08666666666667 - type: recall value: 91.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88 - type: f1 value: 85.47 - type: precision value: 84.43266233766234 - type: recall value: 88 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 90.64999999999999 - type: precision value: 89.68333333333332 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.30660377358491 - type: f1 value: 76.33044137466307 - type: precision value: 74.78970125786164 - type: recall value: 80.30660377358491 - task: type: BitextMining dataset: name: MTEB 
Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.44 - type: precision value: 94.99166666666666 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.53284671532847 - type: f1 value: 95.37712895377129 - type: precision value: 94.7992700729927 - type: recall value: 96.53284671532847 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89 - type: f1 value: 86.23190476190476 - type: precision value: 85.035 - type: recall value: 89 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.585 - type: map_at_10 value: 9.012 - type: map_at_100 value: 14.027000000000001 - type: map_at_1000 value: 15.565000000000001 - type: map_at_3 value: 5.032 - type: map_at_5 value: 6.657 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 45.377 - type: mrr_at_100 value: 46.119 - type: mrr_at_1000 value: 46.127 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 42.585 - type: ndcg_at_1 value: 27.551 - type: ndcg_at_10 value: 23.395 - type: ndcg_at_100 value: 33.342 - type: ndcg_at_1000 value: 45.523 - type: ndcg_at_3 value: 25.158 - type: ndcg_at_5 value: 23.427 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 21.429000000000002 - type: precision_at_100 value: 6.714 - type: precision_at_1000 value: 1.473 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.585 - type: recall_at_10 value: 15.418999999999999 - type: recall_at_100 value: 42.485 - type: recall_at_1000 value: 79.536 - type: recall_at_3 value: 6.239999999999999 - type: recall_at_5 value: 8.996 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.3234 - type: ap value: 14.361688653847423 - type: f1 value: 54.819068624319044 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.97792869269949 - type: f1 value: 62.28965628513728 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.90540145385218 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.53513739047506 - type: cos_sim_ap value: 75.27741586677557 - type: cos_sim_f1 value: 69.18792902473774 - type: cos_sim_precision value: 67.94708725515136 - type: cos_sim_recall value: 70.47493403693932 - type: dot_accuracy value: 84.7052512368123 - type: dot_ap value: 69.36075482849378 - type: 
dot_f1 value: 64.44688376631296 - type: dot_precision value: 59.92288500793831 - type: dot_recall value: 69.70976253298153 - type: euclidean_accuracy value: 86.60666388508076 - type: euclidean_ap value: 75.47512772621097 - type: euclidean_f1 value: 69.413872536473 - type: euclidean_precision value: 67.39562624254472 - type: euclidean_recall value: 71.55672823218997 - type: manhattan_accuracy value: 86.52917684925792 - type: manhattan_ap value: 75.34000110496703 - type: manhattan_f1 value: 69.28489190226429 - type: manhattan_precision value: 67.24608889992551 - type: manhattan_recall value: 71.45118733509234 - type: max_accuracy value: 86.60666388508076 - type: max_ap value: 75.47512772621097 - type: max_f1 value: 69.413872536473 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.01695967710637 - type: cos_sim_ap value: 85.8298270742901 - type: cos_sim_f1 value: 78.46988128389272 - type: cos_sim_precision value: 74.86017897091722 - type: cos_sim_recall value: 82.44533415460425 - type: dot_accuracy value: 88.19420188613343 - type: dot_ap value: 83.82679165901324 - type: dot_f1 value: 76.55833777304208 - type: dot_precision value: 75.6884875846501 - type: dot_recall value: 77.44841392054204 - type: euclidean_accuracy value: 89.03054294252338 - type: euclidean_ap value: 85.89089555185325 - type: euclidean_f1 value: 78.62997658079624 - type: euclidean_precision value: 74.92329149232914 - type: euclidean_recall value: 82.72251308900523 - type: manhattan_accuracy value: 89.0266620095471 - type: manhattan_ap value: 85.86458997929147 - type: manhattan_f1 value: 78.50685331000291 - type: manhattan_precision value: 74.5499861534201 - type: manhattan_recall value: 82.90729904527257 - type: max_accuracy value: 89.03054294252338 - type: max_ap value: 85.89089555185325 - type: max_f1 value: 78.62997658079624 --- ## Multilingual-E5-large Quantized This is a re-upload of **intfloat/multilingual-e5-large** with an additional quantized version of the embeddings model included in the *onnx* folder. The quantization was done with the quantize.py script from xenova/transformers.js library. Here's the original model card: ## Multilingual-E5-large [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 24 layers and the embedding size is 1024. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ", even for non-English texts. # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. 
But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"] tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large') model = AutoModel.from_pretrained('intfloat/multilingual-e5-large') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Supported Languages This model is initialized from [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation. ## Training Details **Initialization**: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) **First stage**: contrastive pre-training with weak supervision | Dataset | Weak supervision | # of text pairs | |--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------| | Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B | | [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M | | [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B | | [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M | | Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M | | [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M | | [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M | | [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M | | [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M | **Second stage**: supervised fine-tuning | Dataset | Language | # of text pairs | |----------------------------------------------------------------------------------------|--------------|-----------------| | [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k | | [NQ](https://github.com/facebookresearch/DPR) | English | 70k | | [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k | | [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k | | [ELI5](https://huggingface.co/datasets/eli5) | English | 500k | | [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k | | [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [SQuAD](https://huggingface.co/datasets/squad) | English | 87k | | [Quora](https://huggingface.co/datasets/quora) | English | 150k | | [Mr. 
TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k | | [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k | For all labeled datasets, we only use their training sets for fine-tuning. For other training details, please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787) | Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th | |-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- | | BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 | | mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 | | BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 | | | | | multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 | | multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 | | multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 | ## MTEB Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example of usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/multilingual-e5-large') input_texts = [ 'query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅" ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements: `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained; otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval. - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. **2. Why are my reproduced results slightly different from those reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. 
Why do the cosine similarity scores distribute around 0.7 to 1.0?** This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores rather than the absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider citing as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations Long texts will be truncated to at most 512 tokens.
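## Using the quantized ONNX model

As a complement to the PyTorch and sentence_transformers examples above, below is a minimal, untested sketch of how the quantized export mentioned at the top of this card (the *onnx* folder) might be run with `onnxruntime`. It assumes the repository has been downloaded locally and that the quantized file is named `onnx/model_quantized.onnx` (the usual output of the transformers.js quantize.py script); the actual file name and the graph's input/output layout may differ, so treat this as a starting point rather than a reference implementation.

```python
# Hedged sketch: the ONNX file path below is an assumption about this repo's layout.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# The tokenizer is unchanged; only the encoder runs through ONNX Runtime.
tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-large")
session = ort.InferenceSession("onnx/model_quantized.onnx")  # assumed path inside this repo

input_texts = [
    "query: how much protein should a female eat",
    "passage: As a general guideline, the CDC's average requirement of protein "
    "for women ages 19 to 70 is 46 grams per day.",
]

batch = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors="np")

# Feed only the inputs the exported graph actually declares (names can vary between exports).
onnx_inputs = {inp.name: batch[inp.name] for inp in session.get_inputs() if inp.name in batch}
last_hidden_state = session.run(None, onnx_inputs)[0]  # assumes the first output is the hidden states

# Same average pooling + L2 normalization as in the PyTorch example above.
mask = np.expand_dims(batch["attention_mask"], -1).astype(last_hidden_state.dtype)
embeddings = (last_hidden_state * mask).sum(axis=1) / mask.sum(axis=1)
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

print((embeddings[:1] @ embeddings[1:].T) * 100)  # query-passage similarity score
```

Scores from the quantized model should track the full-precision example closely, with small deviations from quantization to be expected.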
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"], "license": "mit", "tags": ["mteb", "Sentence Transformers", "sentence-similarity", "feature-extraction", "sentence-transformers"], "model-index": [{"name": "multilingual-e5-large", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 79.05970149253731}, {"type": "ap", "value": 43.486574390835635}, {"type": "f1", "value": 73.32700092140148}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (de)", "type": "mteb/amazon_counterfactual", "config": "de", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 71.22055674518201}, {"type": "ap", "value": 81.55756710830498}, {"type": "f1", "value": 69.28271787752661}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en-ext)", "type": "mteb/amazon_counterfactual", "config": "en-ext", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 80.41979010494754}, {"type": "ap", "value": 29.34879922376344}, {"type": "f1", "value": 67.62475449011278}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (ja)", "type": "mteb/amazon_counterfactual", "config": "ja", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 77.8372591006424}, {"type": "ap", "value": 26.557560591210738}, {"type": "f1", "value": 64.96619417368707}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 93.489875}, {"type": "ap", "value": 90.98758636917603}, {"type": "f1", "value": 93.48554819717332}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 47.564}, {"type": "f1", "value": 46.75122173518047}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (de)", "type": "mteb/amazon_reviews_multi", "config": "de", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 45.400000000000006}, {"type": "f1", "value": 44.17195682400632}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (es)", "type": "mteb/amazon_reviews_multi", "config": "es", "split": "test", "revision": 
"1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 43.068}, {"type": "f1", "value": 42.38155696855596}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (fr)", "type": "mteb/amazon_reviews_multi", "config": "fr", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 41.89}, {"type": "f1", "value": 40.84407321682663}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (ja)", "type": "mteb/amazon_reviews_multi", "config": "ja", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 40.120000000000005}, {"type": "f1", "value": 39.522976223819114}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (zh)", "type": "mteb/amazon_reviews_multi", "config": "zh", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 38.832}, {"type": "f1", "value": 38.0392533394713}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 30.725}, {"type": "map_at_10", "value": 46.055}, {"type": "map_at_100", "value": 46.900999999999996}, {"type": "map_at_1000", "value": 46.911}, {"type": "map_at_3", "value": 41.548}, {"type": "map_at_5", "value": 44.297}, {"type": "mrr_at_1", "value": 31.152}, {"type": "mrr_at_10", "value": 46.231}, {"type": "mrr_at_100", "value": 47.07}, {"type": "mrr_at_1000", "value": 47.08}, {"type": "mrr_at_3", "value": 41.738}, {"type": "mrr_at_5", "value": 44.468999999999994}, {"type": "ndcg_at_1", "value": 30.725}, {"type": "ndcg_at_10", "value": 54.379999999999995}, {"type": "ndcg_at_100", "value": 58.138}, {"type": "ndcg_at_1000", "value": 58.389}, {"type": "ndcg_at_3", "value": 45.156}, {"type": "ndcg_at_5", "value": 50.123}, {"type": "precision_at_1", "value": 30.725}, {"type": "precision_at_10", "value": 8.087}, {"type": "precision_at_100", "value": 0.9769999999999999}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 18.54}, {"type": "precision_at_5", "value": 13.542000000000002}, {"type": "recall_at_1", "value": 30.725}, {"type": "recall_at_10", "value": 80.868}, {"type": "recall_at_100", "value": 97.653}, {"type": "recall_at_1000", "value": 99.57300000000001}, {"type": "recall_at_3", "value": 55.619}, {"type": "recall_at_5", "value": 67.71000000000001}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 44.30960650674069}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 38.427074197498996}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 60.28270056031872}, {"type": "mrr", "value": 74.38332673789738}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB 
BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.05942144105269}, {"type": "cos_sim_spearman", "value": 82.51212105850809}, {"type": "euclidean_pearson", "value": 81.95639829909122}, {"type": "euclidean_spearman", "value": 82.3717564144213}, {"type": "manhattan_pearson", "value": 81.79273425468256}, {"type": "manhattan_spearman", "value": 82.20066817871039}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (de-en)", "type": "mteb/bucc-bitext-mining", "config": "de-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 99.46764091858039}, {"type": "f1", "value": 99.37717466945023}, {"type": "precision", "value": 99.33194154488518}, {"type": "recall", "value": 99.46764091858039}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (fr-en)", "type": "mteb/bucc-bitext-mining", "config": "fr-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 98.29407880255337}, {"type": "f1", "value": 98.11248073959938}, {"type": "precision", "value": 98.02443319392472}, {"type": "recall", "value": 98.29407880255337}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (ru-en)", "type": "mteb/bucc-bitext-mining", "config": "ru-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 97.79009352268791}, {"type": "f1", "value": 97.5176076665512}, {"type": "precision", "value": 97.38136473848286}, {"type": "recall", "value": 97.79009352268791}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (zh-en)", "type": "mteb/bucc-bitext-mining", "config": "zh-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 99.26276987888363}, {"type": "f1", "value": 99.20133403545726}, {"type": "precision", "value": 99.17500438827453}, {"type": "recall", "value": 99.26276987888363}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 84.72727272727273}, {"type": "f1", "value": 84.67672206031433}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 35.34220182511161}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 33.4987096128766}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 25.558249999999997}, {"type": "map_at_10", "value": 34.44425000000001}, {"type": "map_at_100", "value": 35.59833333333333}, {"type": "map_at_1000", "value": 35.706916666666665}, {"type": "map_at_3", "value": 31.691749999999995}, {"type": "map_at_5", "value": 33.252916666666664}, {"type": "mrr_at_1", "value": 
30.252666666666666}, {"type": "mrr_at_10", "value": 38.60675}, {"type": "mrr_at_100", "value": 39.42666666666666}, {"type": "mrr_at_1000", "value": 39.48408333333334}, {"type": "mrr_at_3", "value": 36.17441666666665}, {"type": "mrr_at_5", "value": 37.56275}, {"type": "ndcg_at_1", "value": 30.252666666666666}, {"type": "ndcg_at_10", "value": 39.683}, {"type": "ndcg_at_100", "value": 44.68541666666667}, {"type": "ndcg_at_1000", "value": 46.94316666666668}, {"type": "ndcg_at_3", "value": 34.961749999999995}, {"type": "ndcg_at_5", "value": 37.215666666666664}, {"type": "precision_at_1", "value": 30.252666666666666}, {"type": "precision_at_10", "value": 6.904166666666667}, {"type": "precision_at_100", "value": 1.0989999999999995}, {"type": "precision_at_1000", "value": 0.14733333333333334}, {"type": "precision_at_3", "value": 16.037666666666667}, {"type": "precision_at_5", "value": 11.413583333333333}, {"type": "recall_at_1", "value": 25.558249999999997}, {"type": "recall_at_10", "value": 51.13341666666666}, {"type": "recall_at_100", "value": 73.08366666666667}, {"type": "recall_at_1000", "value": 88.79483333333334}, {"type": "recall_at_3", "value": 37.989083333333326}, {"type": "recall_at_5", "value": 43.787833333333325}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 10.338}, {"type": "map_at_10", "value": 18.360000000000003}, {"type": "map_at_100", "value": 19.942}, {"type": "map_at_1000", "value": 20.134}, {"type": "map_at_3", "value": 15.174000000000001}, {"type": "map_at_5", "value": 16.830000000000002}, {"type": "mrr_at_1", "value": 23.257}, {"type": "mrr_at_10", "value": 33.768}, {"type": "mrr_at_100", "value": 34.707}, {"type": "mrr_at_1000", "value": 34.766000000000005}, {"type": "mrr_at_3", "value": 30.977}, {"type": "mrr_at_5", "value": 32.528}, {"type": "ndcg_at_1", "value": 23.257}, {"type": "ndcg_at_10", "value": 25.733}, {"type": "ndcg_at_100", "value": 32.288}, {"type": "ndcg_at_1000", "value": 35.992000000000004}, {"type": "ndcg_at_3", "value": 20.866}, {"type": "ndcg_at_5", "value": 22.612}, {"type": "precision_at_1", "value": 23.257}, {"type": "precision_at_10", "value": 8.124}, {"type": "precision_at_100", "value": 1.518}, {"type": "precision_at_1000", "value": 0.219}, {"type": "precision_at_3", "value": 15.679000000000002}, {"type": "precision_at_5", "value": 12.117}, {"type": "recall_at_1", "value": 10.338}, {"type": "recall_at_10", "value": 31.154}, {"type": "recall_at_100", "value": 54.161}, {"type": "recall_at_1000", "value": 75.21900000000001}, {"type": "recall_at_3", "value": 19.427}, {"type": "recall_at_5", "value": 24.214}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 8.498}, {"type": "map_at_10", "value": 19.103}, {"type": "map_at_100", "value": 27.375}, {"type": "map_at_1000", "value": 28.981}, {"type": "map_at_3", "value": 13.764999999999999}, {"type": "map_at_5", "value": 15.950000000000001}, {"type": "mrr_at_1", "value": 65.5}, {"type": "mrr_at_10", "value": 74.53800000000001}, {"type": "mrr_at_100", "value": 74.71799999999999}, {"type": "mrr_at_1000", "value": 74.725}, {"type": "mrr_at_3", "value": 72.792}, {"type": "mrr_at_5", "value": 73.554}, {"type": "ndcg_at_1", "value": 53.37499999999999}, {"type": "ndcg_at_10", "value": 41.286}, {"type": "ndcg_at_100", 
"value": 45.972}, {"type": "ndcg_at_1000", "value": 53.123}, {"type": "ndcg_at_3", "value": 46.172999999999995}, {"type": "ndcg_at_5", "value": 43.033}, {"type": "precision_at_1", "value": 65.5}, {"type": "precision_at_10", "value": 32.725}, {"type": "precision_at_100", "value": 10.683}, {"type": "precision_at_1000", "value": 1.978}, {"type": "precision_at_3", "value": 50}, {"type": "precision_at_5", "value": 41.349999999999994}, {"type": "recall_at_1", "value": 8.498}, {"type": "recall_at_10", "value": 25.070999999999998}, {"type": "recall_at_100", "value": 52.383}, {"type": "recall_at_1000", "value": 74.91499999999999}, {"type": "recall_at_3", "value": 15.207999999999998}, {"type": "recall_at_5", "value": 18.563}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 46.5}, {"type": "f1", "value": 41.93833713984145}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 67.914}, {"type": "map_at_10", "value": 78.10000000000001}, {"type": "map_at_100", "value": 78.333}, {"type": "map_at_1000", "value": 78.346}, {"type": "map_at_3", "value": 76.626}, {"type": "map_at_5", "value": 77.627}, {"type": "mrr_at_1", "value": 72.74199999999999}, {"type": "mrr_at_10", "value": 82.414}, {"type": "mrr_at_100", "value": 82.511}, {"type": "mrr_at_1000", "value": 82.513}, {"type": "mrr_at_3", "value": 81.231}, {"type": "mrr_at_5", "value": 82.065}, {"type": "ndcg_at_1", "value": 72.74199999999999}, {"type": "ndcg_at_10", "value": 82.806}, {"type": "ndcg_at_100", "value": 83.677}, {"type": "ndcg_at_1000", "value": 83.917}, {"type": "ndcg_at_3", "value": 80.305}, {"type": "ndcg_at_5", "value": 81.843}, {"type": "precision_at_1", "value": 72.74199999999999}, {"type": "precision_at_10", "value": 10.24}, {"type": "precision_at_100", "value": 1.089}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 31.268}, {"type": "precision_at_5", "value": 19.706000000000003}, {"type": "recall_at_1", "value": 67.914}, {"type": "recall_at_10", "value": 92.889}, {"type": "recall_at_100", "value": 96.42699999999999}, {"type": "recall_at_1000", "value": 97.92}, {"type": "recall_at_3", "value": 86.21}, {"type": "recall_at_5", "value": 90.036}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 22.166}, {"type": "map_at_10", "value": 35.57}, {"type": "map_at_100", "value": 37.405}, {"type": "map_at_1000", "value": 37.564}, {"type": "map_at_3", "value": 30.379}, {"type": "map_at_5", "value": 33.324}, {"type": "mrr_at_1", "value": 43.519000000000005}, {"type": "mrr_at_10", "value": 51.556000000000004}, {"type": "mrr_at_100", "value": 52.344}, {"type": "mrr_at_1000", "value": 52.373999999999995}, {"type": "mrr_at_3", "value": 48.868}, {"type": "mrr_at_5", "value": 50.319}, {"type": "ndcg_at_1", "value": 43.519000000000005}, {"type": "ndcg_at_10", "value": 43.803}, {"type": "ndcg_at_100", "value": 50.468999999999994}, {"type": "ndcg_at_1000", "value": 53.111}, {"type": "ndcg_at_3", "value": 38.893}, {"type": "ndcg_at_5", "value": 40.653}, {"type": "precision_at_1", "value": 43.519000000000005}, {"type": "precision_at_10", 
"value": 12.253}, {"type": "precision_at_100", "value": 1.931}, {"type": "precision_at_1000", "value": 0.242}, {"type": "precision_at_3", "value": 25.617}, {"type": "precision_at_5", "value": 19.383}, {"type": "recall_at_1", "value": 22.166}, {"type": "recall_at_10", "value": 51.6}, {"type": "recall_at_100", "value": 76.574}, {"type": "recall_at_1000", "value": 92.192}, {"type": "recall_at_3", "value": 34.477999999999994}, {"type": "recall_at_5", "value": 41.835}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 39.041}, {"type": "map_at_10", "value": 62.961999999999996}, {"type": "map_at_100", "value": 63.79899999999999}, {"type": "map_at_1000", "value": 63.854}, {"type": "map_at_3", "value": 59.399}, {"type": "map_at_5", "value": 61.669}, {"type": "mrr_at_1", "value": 78.082}, {"type": "mrr_at_10", "value": 84.321}, {"type": "mrr_at_100", "value": 84.49600000000001}, {"type": "mrr_at_1000", "value": 84.502}, {"type": "mrr_at_3", "value": 83.421}, {"type": "mrr_at_5", "value": 83.977}, {"type": "ndcg_at_1", "value": 78.082}, {"type": "ndcg_at_10", "value": 71.229}, {"type": "ndcg_at_100", "value": 74.10900000000001}, {"type": "ndcg_at_1000", "value": 75.169}, {"type": "ndcg_at_3", "value": 66.28699999999999}, {"type": "ndcg_at_5", "value": 69.084}, {"type": "precision_at_1", "value": 78.082}, {"type": "precision_at_10", "value": 14.993}, {"type": "precision_at_100", "value": 1.7239999999999998}, {"type": "precision_at_1000", "value": 0.186}, {"type": "precision_at_3", "value": 42.737}, {"type": "precision_at_5", "value": 27.843}, {"type": "recall_at_1", "value": 39.041}, {"type": "recall_at_10", "value": 74.96300000000001}, {"type": "recall_at_100", "value": 86.199}, {"type": "recall_at_1000", "value": 93.228}, {"type": "recall_at_3", "value": 64.105}, {"type": "recall_at_5", "value": 69.608}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 90.23160000000001}, {"type": "ap", "value": 85.5674856808308}, {"type": "f1", "value": 90.18033354786317}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 24.091}, {"type": "map_at_10", "value": 36.753}, {"type": "map_at_100", "value": 37.913000000000004}, {"type": "map_at_1000", "value": 37.958999999999996}, {"type": "map_at_3", "value": 32.818999999999996}, {"type": "map_at_5", "value": 35.171}, {"type": "mrr_at_1", "value": 24.742}, {"type": "mrr_at_10", "value": 37.285000000000004}, {"type": "mrr_at_100", "value": 38.391999999999996}, {"type": "mrr_at_1000", "value": 38.431}, {"type": "mrr_at_3", "value": 33.440999999999995}, {"type": "mrr_at_5", "value": 35.75}, {"type": "ndcg_at_1", "value": 24.742}, {"type": "ndcg_at_10", "value": 43.698}, {"type": "ndcg_at_100", "value": 49.145}, {"type": "ndcg_at_1000", "value": 50.23800000000001}, {"type": "ndcg_at_3", "value": 35.769}, {"type": "ndcg_at_5", "value": 39.961999999999996}, {"type": "precision_at_1", "value": 24.742}, {"type": "precision_at_10", "value": 6.7989999999999995}, {"type": "precision_at_100", "value": 0.95}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 15.096000000000002}, 
{"type": "precision_at_5", "value": 11.183}, {"type": "recall_at_1", "value": 24.091}, {"type": "recall_at_10", "value": 65.068}, {"type": "recall_at_100", "value": 89.899}, {"type": "recall_at_1000", "value": 98.16}, {"type": "recall_at_3", "value": 43.68}, {"type": "recall_at_5", "value": 53.754999999999995}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 93.66621067031465}, {"type": "f1", "value": 93.49622853272142}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (de)", "type": "mteb/mtop_domain", "config": "de", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 91.94702733164272}, {"type": "f1", "value": 91.17043441745282}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (es)", "type": "mteb/mtop_domain", "config": "es", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 92.20146764509674}, {"type": "f1", "value": 91.98359080555608}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (fr)", "type": "mteb/mtop_domain", "config": "fr", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 88.99780770435328}, {"type": "f1", "value": 89.19746342724068}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (hi)", "type": "mteb/mtop_domain", "config": "hi", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 89.78486912871998}, {"type": "f1", "value": 89.24578823628642}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (th)", "type": "mteb/mtop_domain", "config": "th", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 88.74502712477394}, {"type": "f1", "value": 89.00297573881542}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 77.9046967624259}, {"type": "f1", "value": 59.36787125785957}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (de)", "type": "mteb/mtop_intent", "config": "de", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 74.5280360664976}, {"type": "f1", "value": 57.17723440888718}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (es)", "type": "mteb/mtop_intent", "config": "es", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 75.44029352901934}, {"type": "f1", "value": 54.052855531072964}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (fr)", "type": "mteb/mtop_intent", "config": "fr", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 70.5606013153774}, {"type": "f1", "value": 52.62215934386531}]}, {"task": {"type": 
"Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (hi)", "type": "mteb/mtop_intent", "config": "hi", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 73.11581211903908}, {"type": "f1", "value": 52.341291845645465}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (th)", "type": "mteb/mtop_intent", "config": "th", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 74.28933092224233}, {"type": "f1", "value": 57.07918745504911}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (af)", "type": "mteb/amazon_massive_intent", "config": "af", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 62.38063214525892}, {"type": "f1", "value": 59.46463723443009}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (am)", "type": "mteb/amazon_massive_intent", "config": "am", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 56.06926698049766}, {"type": "f1", "value": 52.49084283283562}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ar)", "type": "mteb/amazon_massive_intent", "config": "ar", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 60.74983187626093}, {"type": "f1", "value": 56.960640620165904}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (az)", "type": "mteb/amazon_massive_intent", "config": "az", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 64.86550100874243}, {"type": "f1", "value": 62.47370548140688}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (bn)", "type": "mteb/amazon_massive_intent", "config": "bn", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 63.971082716879636}, {"type": "f1", "value": 61.03812421957381}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (cy)", "type": "mteb/amazon_massive_intent", "config": "cy", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 54.98318762609282}, {"type": "f1", "value": 51.51207916008392}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (da)", "type": "mteb/amazon_massive_intent", "config": "da", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.45527908540686}, {"type": "f1", "value": 66.16631905400318}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (de)", "type": "mteb/amazon_massive_intent", "config": "de", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.32750504371216}, {"type": "f1", "value": 66.16755288646591}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (el)", "type": "mteb/amazon_massive_intent", "config": "el", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": 
"accuracy", "value": 69.09213180901143}, {"type": "f1", "value": 66.95654394661507}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 73.75588433086752}, {"type": "f1", "value": 71.79973779656923}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (es)", "type": "mteb/amazon_massive_intent", "config": "es", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 70.49428379287154}, {"type": "f1", "value": 68.37494379215734}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fa)", "type": "mteb/amazon_massive_intent", "config": "fa", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.90921318090115}, {"type": "f1", "value": 66.79517376481645}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fi)", "type": "mteb/amazon_massive_intent", "config": "fi", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 70.12104909213181}, {"type": "f1", "value": 67.29448842879584}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fr)", "type": "mteb/amazon_massive_intent", "config": "fr", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.34095494283793}, {"type": "f1", "value": 67.01134288992947}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (he)", "type": "mteb/amazon_massive_intent", "config": "he", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 67.61264290517822}, {"type": "f1", "value": 64.68730512660757}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hi)", "type": "mteb/amazon_massive_intent", "config": "hi", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 67.79757901815738}, {"type": "f1", "value": 65.24938539425598}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hu)", "type": "mteb/amazon_massive_intent", "config": "hu", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.68728984532616}, {"type": "f1", "value": 67.0487169762553}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hy)", "type": "mteb/amazon_massive_intent", "config": "hy", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 62.07464694014795}, {"type": "f1", "value": 59.183532276789286}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (id)", "type": "mteb/amazon_massive_intent", "config": "id", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 70.04707464694015}, {"type": "f1", "value": 67.66829629003848}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (is)", "type": 
"mteb/amazon_massive_intent", "config": "is", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 62.42434431741762}, {"type": "f1", "value": 59.01617226544757}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (it)", "type": "mteb/amazon_massive_intent", "config": "it", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 70.53127101546738}, {"type": "f1", "value": 68.10033760906255}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ja)", "type": "mteb/amazon_massive_intent", "config": "ja", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 72.50504371217215}, {"type": "f1", "value": 69.74931103158923}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (jv)", "type": "mteb/amazon_massive_intent", "config": "jv", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 57.91190316072628}, {"type": "f1", "value": 54.05551136648796}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ka)", "type": "mteb/amazon_massive_intent", "config": "ka", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 51.78211163416275}, {"type": "f1", "value": 49.874888544058535}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (km)", "type": "mteb/amazon_massive_intent", "config": "km", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 47.017484868863484}, {"type": "f1", "value": 44.53364263352014}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (kn)", "type": "mteb/amazon_massive_intent", "config": "kn", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 62.16207128446537}, {"type": "f1", "value": 59.01185692320829}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ko)", "type": "mteb/amazon_massive_intent", "config": "ko", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.42501681237391}, {"type": "f1", "value": 67.13169450166086}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (lv)", "type": "mteb/amazon_massive_intent", "config": "lv", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 67.0780094149294}, {"type": "f1", "value": 64.41720167850707}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ml)", "type": "mteb/amazon_massive_intent", "config": "ml", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 65.57162071284466}, {"type": "f1", "value": 62.414138683804424}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (mn)", "type": "mteb/amazon_massive_intent", "config": "mn", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 61.71149966375252}, {"type": "f1", "value": 
58.594805125087234}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ms)", "type": "mteb/amazon_massive_intent", "config": "ms", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 66.03900470746471}, {"type": "f1", "value": 63.87937257883887}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (my)", "type": "mteb/amazon_massive_intent", "config": "my", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 60.8776059179556}, {"type": "f1", "value": 57.48587618059131}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (nb)", "type": "mteb/amazon_massive_intent", "config": "nb", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.87895090786819}, {"type": "f1", "value": 66.8141299430347}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (nl)", "type": "mteb/amazon_massive_intent", "config": "nl", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 70.45057162071285}, {"type": "f1", "value": 67.46444039673516}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pl)", "type": "mteb/amazon_massive_intent", "config": "pl", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 71.546738399462}, {"type": "f1", "value": 68.63640876702655}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pt)", "type": "mteb/amazon_massive_intent", "config": "pt", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 70.72965702757229}, {"type": "f1", "value": 68.54119560379115}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ro)", "type": "mteb/amazon_massive_intent", "config": "ro", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 68.35574983187625}, {"type": "f1", "value": 65.88844917691927}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ru)", "type": "mteb/amazon_massive_intent", "config": "ru", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 71.70477471418964}, {"type": "f1", "value": 69.19665697061978}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sl)", "type": "mteb/amazon_massive_intent", "config": "sl", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 67.0880968392737}, {"type": "f1", "value": 64.76962317666086}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sq)", "type": "mteb/amazon_massive_intent", "config": "sq", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 65.18493611297916}, {"type": "f1", "value": 62.49984559035371}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sv)", "type": "mteb/amazon_massive_intent", "config": "sv", "split": "test", "revision": 
"31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 71.75857431069265}, {"type": "f1", "value": 69.20053687623418}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sw)", "type": "mteb/amazon_massive_intent", "config": "sw", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 58.500336247478145}, {"type": "f1", "value": 55.2972398687929}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ta)", "type": "mteb/amazon_massive_intent", "config": "ta", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 62.68997982515132}, {"type": "f1", "value": 59.36848202755348}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (te)", "type": "mteb/amazon_massive_intent", "config": "te", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 63.01950235373235}, {"type": "f1", "value": 60.09351954625423}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (th)", "type": "mteb/amazon_massive_intent", "config": "th", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 68.29186281102892}, {"type": "f1", "value": 67.57860496703447}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (tl)", "type": "mteb/amazon_massive_intent", "config": "tl", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 64.77471418964357}, {"type": "f1", "value": 61.913983147713836}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (tr)", "type": "mteb/amazon_massive_intent", "config": "tr", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.87222595830532}, {"type": "f1", "value": 66.03679033708141}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ur)", "type": "mteb/amazon_massive_intent", "config": "ur", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 64.04505716207127}, {"type": "f1", "value": 61.28569169817908}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (vi)", "type": "mteb/amazon_massive_intent", "config": "vi", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.38466711499663}, {"type": "f1", "value": 67.20532357036844}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-CN)", "type": "mteb/amazon_massive_intent", "config": "zh-CN", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 71.12306657700067}, {"type": "f1", "value": 68.91251226588182}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-TW)", "type": "mteb/amazon_massive_intent", "config": "zh-TW", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 66.20040349697378}, {"type": "f1", "value": 66.02657347714175}]}, {"task": {"type": "Classification"}, "dataset": 
{"name": "MTEB MassiveScenarioClassification (af)", "type": "mteb/amazon_massive_scenario", "config": "af", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 68.73907195696032}, {"type": "f1", "value": 66.98484521791418}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (am)", "type": "mteb/amazon_massive_scenario", "config": "am", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 60.58843308675185}, {"type": "f1", "value": 58.95591723092005}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ar)", "type": "mteb/amazon_massive_scenario", "config": "ar", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 66.22730329522528}, {"type": "f1", "value": 66.0894499712115}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (az)", "type": "mteb/amazon_massive_scenario", "config": "az", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 66.48285137861465}, {"type": "f1", "value": 65.21963176785157}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (bn)", "type": "mteb/amazon_massive_scenario", "config": "bn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 67.74714189643578}, {"type": "f1", "value": 66.8212192745412}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (cy)", "type": "mteb/amazon_massive_scenario", "config": "cy", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 59.09213180901143}, {"type": "f1", "value": 56.70735546356339}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (da)", "type": "mteb/amazon_massive_scenario", "config": "da", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 75.05716207128448}, {"type": "f1", "value": 74.8413712365364}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (de)", "type": "mteb/amazon_massive_scenario", "config": "de", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 74.69737726967047}, {"type": "f1", "value": 74.7664341963}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (el)", "type": "mteb/amazon_massive_scenario", "config": "el", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 73.90383322125084}, {"type": "f1", "value": 73.59201554448323}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 77.51176866173503}, {"type": "f1", "value": 77.46104434577758}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (es)", "type": "mteb/amazon_massive_scenario", "config": "es", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, 
"metrics": [{"type": "accuracy", "value": 74.31069266980496}, {"type": "f1", "value": 74.61048660675635}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fa)", "type": "mteb/amazon_massive_scenario", "config": "fa", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 72.95225285810356}, {"type": "f1", "value": 72.33160006574627}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fi)", "type": "mteb/amazon_massive_scenario", "config": "fi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 73.12373907195696}, {"type": "f1", "value": 73.20921012557481}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fr)", "type": "mteb/amazon_massive_scenario", "config": "fr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 73.86684599865501}, {"type": "f1", "value": 73.82348774610831}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (he)", "type": "mteb/amazon_massive_scenario", "config": "he", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 71.40215198386012}, {"type": "f1", "value": 71.11945183971858}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hi)", "type": "mteb/amazon_massive_scenario", "config": "hi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 72.12844653665098}, {"type": "f1", "value": 71.34450495911766}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hu)", "type": "mteb/amazon_massive_scenario", "config": "hu", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 74.52252858103566}, {"type": "f1", "value": 73.98878711342999}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hy)", "type": "mteb/amazon_massive_scenario", "config": "hy", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 64.93611297915265}, {"type": "f1", "value": 63.723200467653385}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (id)", "type": "mteb/amazon_massive_scenario", "config": "id", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 74.11903160726295}, {"type": "f1", "value": 73.82138439467096}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (is)", "type": "mteb/amazon_massive_scenario", "config": "is", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 67.15198386012105}, {"type": "f1", "value": 66.02172193802167}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (it)", "type": "mteb/amazon_massive_scenario", "config": "it", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 74.32414256893072}, {"type": "f1", "value": 74.30943421170574}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB 
MassiveScenarioClassification (ja)", "type": "mteb/amazon_massive_scenario", "config": "ja", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 77.46805648957633}, {"type": "f1", "value": 77.62808409298209}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (jv)", "type": "mteb/amazon_massive_scenario", "config": "jv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 63.318762609280434}, {"type": "f1", "value": 62.094284066075076}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ka)", "type": "mteb/amazon_massive_scenario", "config": "ka", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 58.34902488231338}, {"type": "f1", "value": 57.12893860987984}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (km)", "type": "mteb/amazon_massive_scenario", "config": "km", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 50.88433086751849}, {"type": "f1", "value": 48.2272350802058}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (kn)", "type": "mteb/amazon_massive_scenario", "config": "kn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 66.4425016812374}, {"type": "f1", "value": 64.61463095996173}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ko)", "type": "mteb/amazon_massive_scenario", "config": "ko", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 75.04707464694015}, {"type": "f1", "value": 75.05099199098998}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (lv)", "type": "mteb/amazon_massive_scenario", "config": "lv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 70.50437121721586}, {"type": "f1", "value": 69.83397721096314}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ml)", "type": "mteb/amazon_massive_scenario", "config": "ml", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 69.94283792871553}, {"type": "f1", "value": 68.8704663703913}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (mn)", "type": "mteb/amazon_massive_scenario", "config": "mn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 64.79488903833222}, {"type": "f1", "value": 63.615424063345436}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ms)", "type": "mteb/amazon_massive_scenario", "config": "ms", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 69.88231338264963}, {"type": "f1", "value": 68.57892302593237}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (my)", "type": "mteb/amazon_massive_scenario", "config": "my", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, 
"metrics": [{"type": "accuracy", "value": 63.248150638870214}, {"type": "f1", "value": 61.06680605338809}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nb)", "type": "mteb/amazon_massive_scenario", "config": "nb", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 74.84196368527236}, {"type": "f1", "value": 74.52566464968763}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nl)", "type": "mteb/amazon_massive_scenario", "config": "nl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 74.8285137861466}, {"type": "f1", "value": 74.8853197608802}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pl)", "type": "mteb/amazon_massive_scenario", "config": "pl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 74.13248150638869}, {"type": "f1", "value": 74.3982040999179}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pt)", "type": "mteb/amazon_massive_scenario", "config": "pt", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 73.49024882313383}, {"type": "f1", "value": 73.82153848368573}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ro)", "type": "mteb/amazon_massive_scenario", "config": "ro", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 71.72158708809684}, {"type": "f1", "value": 71.85049433180541}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ru)", "type": "mteb/amazon_massive_scenario", "config": "ru", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 75.137861466039}, {"type": "f1", "value": 75.37628348188467}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sl)", "type": "mteb/amazon_massive_scenario", "config": "sl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 71.86953597848016}, {"type": "f1", "value": 71.87537624521661}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sq)", "type": "mteb/amazon_massive_scenario", "config": "sq", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 70.27572293207801}, {"type": "f1", "value": 68.80017302344231}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sv)", "type": "mteb/amazon_massive_scenario", "config": "sv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 76.09952925353059}, {"type": "f1", "value": 76.07992707688408}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sw)", "type": "mteb/amazon_massive_scenario", "config": "sw", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 63.140551445864155}, {"type": "f1", "value": 61.73855010331415}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB 
MassiveScenarioClassification (ta)", "type": "mteb/amazon_massive_scenario", "config": "ta", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 66.27774041694687}, {"type": "f1", "value": 64.83664868894539}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (te)", "type": "mteb/amazon_massive_scenario", "config": "te", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 66.69468728984533}, {"type": "f1", "value": 64.76239666920868}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (th)", "type": "mteb/amazon_massive_scenario", "config": "th", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 73.44653665097512}, {"type": "f1", "value": 73.14646052013873}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tl)", "type": "mteb/amazon_massive_scenario", "config": "tl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 67.71351714862139}, {"type": "f1", "value": 66.67212180163382}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tr)", "type": "mteb/amazon_massive_scenario", "config": "tr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 73.9946200403497}, {"type": "f1", "value": 73.87348793725525}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ur)", "type": "mteb/amazon_massive_scenario", "config": "ur", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 68.15400134498992}, {"type": "f1", "value": 67.09433241421094}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (vi)", "type": "mteb/amazon_massive_scenario", "config": "vi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 73.11365164761264}, {"type": "f1", "value": 73.59502539433753}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-CN)", "type": "mteb/amazon_massive_scenario", "config": "zh-CN", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 76.82582380632145}, {"type": "f1", "value": 76.89992945316313}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-TW)", "type": "mteb/amazon_massive_scenario", "config": "zh-TW", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 71.81237390719569}, {"type": "f1", "value": 72.36499770986265}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 31.480506569594695}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 29.71252128004552}]}, 
{"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 31.421396787056548}, {"type": "mrr", "value": 32.48155274872267}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 5.595}, {"type": "map_at_10", "value": 12.642000000000001}, {"type": "map_at_100", "value": 15.726}, {"type": "map_at_1000", "value": 17.061999999999998}, {"type": "map_at_3", "value": 9.125}, {"type": "map_at_5", "value": 10.866000000000001}, {"type": "mrr_at_1", "value": 43.344}, {"type": "mrr_at_10", "value": 52.227999999999994}, {"type": "mrr_at_100", "value": 52.898999999999994}, {"type": "mrr_at_1000", "value": 52.944}, {"type": "mrr_at_3", "value": 49.845}, {"type": "mrr_at_5", "value": 51.115}, {"type": "ndcg_at_1", "value": 41.949999999999996}, {"type": "ndcg_at_10", "value": 33.995}, {"type": "ndcg_at_100", "value": 30.869999999999997}, {"type": "ndcg_at_1000", "value": 39.487}, {"type": "ndcg_at_3", "value": 38.903999999999996}, {"type": "ndcg_at_5", "value": 37.236999999999995}, {"type": "precision_at_1", "value": 43.344}, {"type": "precision_at_10", "value": 25.480000000000004}, {"type": "precision_at_100", "value": 7.672}, {"type": "precision_at_1000", "value": 2.028}, {"type": "precision_at_3", "value": 36.636}, {"type": "precision_at_5", "value": 32.632}, {"type": "recall_at_1", "value": 5.595}, {"type": "recall_at_10", "value": 16.466}, {"type": "recall_at_100", "value": 31.226}, {"type": "recall_at_1000", "value": 62.778999999999996}, {"type": "recall_at_3", "value": 9.931}, {"type": "recall_at_5", "value": 12.884}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 40.414}, {"type": "map_at_10", "value": 56.754000000000005}, {"type": "map_at_100", "value": 57.457}, {"type": "map_at_1000", "value": 57.477999999999994}, {"type": "map_at_3", "value": 52.873999999999995}, {"type": "map_at_5", "value": 55.175}, {"type": "mrr_at_1", "value": 45.278}, {"type": "mrr_at_10", "value": 59.192}, {"type": "mrr_at_100", "value": 59.650000000000006}, {"type": "mrr_at_1000", "value": 59.665}, {"type": "mrr_at_3", "value": 56.141}, {"type": "mrr_at_5", "value": 57.998000000000005}, {"type": "ndcg_at_1", "value": 45.278}, {"type": "ndcg_at_10", "value": 64.056}, {"type": "ndcg_at_100", "value": 66.89}, {"type": "ndcg_at_1000", "value": 67.364}, {"type": "ndcg_at_3", "value": 56.97}, {"type": "ndcg_at_5", "value": 60.719}, {"type": "precision_at_1", "value": 45.278}, {"type": "precision_at_10", "value": 9.994}, {"type": "precision_at_100", "value": 1.165}, {"type": "precision_at_1000", "value": 0.121}, {"type": "precision_at_3", "value": 25.512}, {"type": "precision_at_5", "value": 17.509}, {"type": "recall_at_1", "value": 40.414}, {"type": "recall_at_10", "value": 83.596}, {"type": "recall_at_100", "value": 95.72}, {"type": "recall_at_1000", "value": 99.24}, {"type": "recall_at_3", "value": 65.472}, {"type": "recall_at_5", "value": 74.039}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.352}, {"type": "map_at_10", "value": 
84.369}, {"type": "map_at_100", "value": 85.02499999999999}, {"type": "map_at_1000", "value": 85.04}, {"type": "map_at_3", "value": 81.42399999999999}, {"type": "map_at_5", "value": 83.279}, {"type": "mrr_at_1", "value": 81.05}, {"type": "mrr_at_10", "value": 87.401}, {"type": "mrr_at_100", "value": 87.504}, {"type": "mrr_at_1000", "value": 87.505}, {"type": "mrr_at_3", "value": 86.443}, {"type": "mrr_at_5", "value": 87.10799999999999}, {"type": "ndcg_at_1", "value": 81.04}, {"type": "ndcg_at_10", "value": 88.181}, {"type": "ndcg_at_100", "value": 89.411}, {"type": "ndcg_at_1000", "value": 89.507}, {"type": "ndcg_at_3", "value": 85.28099999999999}, {"type": "ndcg_at_5", "value": 86.888}, {"type": "precision_at_1", "value": 81.04}, {"type": "precision_at_10", "value": 13.406}, {"type": "precision_at_100", "value": 1.5350000000000001}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.31}, {"type": "precision_at_5", "value": 24.54}, {"type": "recall_at_1", "value": 70.352}, {"type": "recall_at_10", "value": 95.358}, {"type": "recall_at_100", "value": 99.541}, {"type": "recall_at_1000", "value": 99.984}, {"type": "recall_at_3", "value": 87.111}, {"type": "recall_at_5", "value": 91.643}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 46.54068723291946}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 63.216287629895994}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.023000000000001}, {"type": "map_at_10", "value": 10.071}, {"type": "map_at_100", "value": 11.892}, {"type": "map_at_1000", "value": 12.196}, {"type": "map_at_3", "value": 7.234}, {"type": "map_at_5", "value": 8.613999999999999}, {"type": "mrr_at_1", "value": 19.900000000000002}, {"type": "mrr_at_10", "value": 30.516}, {"type": "mrr_at_100", "value": 31.656000000000002}, {"type": "mrr_at_1000", "value": 31.723000000000003}, {"type": "mrr_at_3", "value": 27.400000000000002}, {"type": "mrr_at_5", "value": 29.270000000000003}, {"type": "ndcg_at_1", "value": 19.900000000000002}, {"type": "ndcg_at_10", "value": 17.474}, {"type": "ndcg_at_100", "value": 25.020999999999997}, {"type": "ndcg_at_1000", "value": 30.728}, {"type": "ndcg_at_3", "value": 16.588}, {"type": "ndcg_at_5", "value": 14.498}, {"type": "precision_at_1", "value": 19.900000000000002}, {"type": "precision_at_10", "value": 9.139999999999999}, {"type": "precision_at_100", "value": 2.011}, {"type": "precision_at_1000", "value": 0.33899999999999997}, {"type": "precision_at_3", "value": 15.667}, {"type": "precision_at_5", "value": 12.839999999999998}, {"type": "recall_at_1", "value": 4.023000000000001}, {"type": "recall_at_10", "value": 18.497}, {"type": "recall_at_100", "value": 40.8}, {"type": "recall_at_1000", "value": 68.812}, {"type": "recall_at_3", "value": 9.508}, {"type": "recall_at_5", "value": 12.983}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, 
"metrics": [{"type": "cos_sim_pearson", "value": 83.967008785134}, {"type": "cos_sim_spearman", "value": 80.23142141101837}, {"type": "euclidean_pearson", "value": 81.20166064704539}, {"type": "euclidean_spearman", "value": 80.18961335654585}, {"type": "manhattan_pearson", "value": 81.13925443187625}, {"type": "manhattan_spearman", "value": 80.07948723044424}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.94262461316023}, {"type": "cos_sim_spearman", "value": 80.01596278563865}, {"type": "euclidean_pearson", "value": 83.80799622922581}, {"type": "euclidean_spearman", "value": 79.94984954947103}, {"type": "manhattan_pearson", "value": 83.68473841756281}, {"type": "manhattan_spearman", "value": 79.84990707951822}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.57346443146068}, {"type": "cos_sim_spearman", "value": 81.54689837570866}, {"type": "euclidean_pearson", "value": 81.10909881516007}, {"type": "euclidean_spearman", "value": 81.56746243261762}, {"type": "manhattan_pearson", "value": 80.87076036186582}, {"type": "manhattan_spearman", "value": 81.33074987964402}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.54733787179849}, {"type": "cos_sim_spearman", "value": 77.72202105610411}, {"type": "euclidean_pearson", "value": 78.9043595478849}, {"type": "euclidean_spearman", "value": 77.93422804309435}, {"type": "manhattan_pearson", "value": 78.58115121621368}, {"type": "manhattan_spearman", "value": 77.62508135122033}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.59880017237558}, {"type": "cos_sim_spearman", "value": 89.31088630824758}, {"type": "euclidean_pearson", "value": 88.47069261564656}, {"type": "euclidean_spearman", "value": 89.33581971465233}, {"type": "manhattan_pearson", "value": 88.40774264100956}, {"type": "manhattan_spearman", "value": 89.28657485627835}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.08055117917084}, {"type": "cos_sim_spearman", "value": 85.78491813080304}, {"type": "euclidean_pearson", "value": 84.99329155500392}, {"type": "euclidean_spearman", "value": 85.76728064677287}, {"type": "manhattan_pearson", "value": 84.87947428989587}, {"type": "manhattan_spearman", "value": 85.62429454917464}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ko-ko)", "type": "mteb/sts17-crosslingual-sts", "config": "ko-ko", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.14190939287384}, {"type": "cos_sim_spearman", "value": 82.27331573306041}, {"type": "euclidean_pearson", "value": 81.891896953716}, {"type": "euclidean_spearman", "value": 82.37695542955998}, 
{"type": "manhattan_pearson", "value": 81.73123869460504}, {"type": "manhattan_spearman", "value": 82.19989168441421}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ar-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "ar-ar", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 76.84695301843362}, {"type": "cos_sim_spearman", "value": 77.87790986014461}, {"type": "euclidean_pearson", "value": 76.91981583106315}, {"type": "euclidean_spearman", "value": 77.88154772749589}, {"type": "manhattan_pearson", "value": 76.94953277451093}, {"type": "manhattan_spearman", "value": 77.80499230728604}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "en-ar", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.44657840482016}, {"type": "cos_sim_spearman", "value": 75.05531095119674}, {"type": "euclidean_pearson", "value": 75.88161755829299}, {"type": "euclidean_spearman", "value": 74.73176238219332}, {"type": "manhattan_pearson", "value": 75.63984765635362}, {"type": "manhattan_spearman", "value": 74.86476440770737}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-de)", "type": "mteb/sts17-crosslingual-sts", "config": "en-de", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.64700140524133}, {"type": "cos_sim_spearman", "value": 86.16014210425672}, {"type": "euclidean_pearson", "value": 86.49086860843221}, {"type": "euclidean_spearman", "value": 86.09729326815614}, {"type": "manhattan_pearson", "value": 86.43406265125513}, {"type": "manhattan_spearman", "value": 86.17740150939994}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.91170098764921}, {"type": "cos_sim_spearman", "value": 88.12437004058931}, {"type": "euclidean_pearson", "value": 88.81828254494437}, {"type": "euclidean_spearman", "value": 88.14831794572122}, {"type": "manhattan_pearson", "value": 88.93442183448961}, {"type": "manhattan_spearman", "value": 88.15254630778304}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-tr)", "type": "mteb/sts17-crosslingual-sts", "config": "en-tr", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 72.91390577997292}, {"type": "cos_sim_spearman", "value": 71.22979457536074}, {"type": "euclidean_pearson", "value": 74.40314008106749}, {"type": "euclidean_spearman", "value": 72.54972136083246}, {"type": "manhattan_pearson", "value": 73.85687539530218}, {"type": "manhattan_spearman", "value": 72.09500771742637}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-en)", "type": "mteb/sts17-crosslingual-sts", "config": "es-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.9301067983089}, {"type": "cos_sim_spearman", "value": 80.74989828346473}, {"type": "euclidean_pearson", "value": 81.36781301814257}, {"type": "euclidean_spearman", "value": 80.9448819964426}, {"type": "manhattan_pearson", "value": 81.0351322685609}, {"type": "manhattan_spearman", "value": 80.70192121844177}]}, {"task": {"type": "STS"}, 
"dataset": {"name": "MTEB STS17 (es-es)", "type": "mteb/sts17-crosslingual-sts", "config": "es-es", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.13820465980005}, {"type": "cos_sim_spearman", "value": 86.73532498758757}, {"type": "euclidean_pearson", "value": 87.21329451846637}, {"type": "euclidean_spearman", "value": 86.57863198601002}, {"type": "manhattan_pearson", "value": 87.06973713818554}, {"type": "manhattan_spearman", "value": 86.47534918791499}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (fr-en)", "type": "mteb/sts17-crosslingual-sts", "config": "fr-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.48720108904415}, {"type": "cos_sim_spearman", "value": 85.62221757068387}, {"type": "euclidean_pearson", "value": 86.1010129512749}, {"type": "euclidean_spearman", "value": 85.86580966509942}, {"type": "manhattan_pearson", "value": 86.26800938808971}, {"type": "manhattan_spearman", "value": 85.88902721678429}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (it-en)", "type": "mteb/sts17-crosslingual-sts", "config": "it-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.98021347333516}, {"type": "cos_sim_spearman", "value": 84.53806553803501}, {"type": "euclidean_pearson", "value": 84.61483347248364}, {"type": "euclidean_spearman", "value": 85.14191408011702}, {"type": "manhattan_pearson", "value": 84.75297588825967}, {"type": "manhattan_spearman", "value": 85.33176753669242}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (nl-en)", "type": "mteb/sts17-crosslingual-sts", "config": "nl-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.51856644893233}, {"type": "cos_sim_spearman", "value": 85.27510748506413}, {"type": "euclidean_pearson", "value": 85.09886861540977}, {"type": "euclidean_spearman", "value": 85.62579245860887}, {"type": "manhattan_pearson", "value": 84.93017860464607}, {"type": "manhattan_spearman", "value": 85.5063988898453}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 62.581573200584195}, {"type": "cos_sim_spearman", "value": 63.05503590247928}, {"type": "euclidean_pearson", "value": 63.652564812602094}, {"type": "euclidean_spearman", "value": 62.64811520876156}, {"type": "manhattan_pearson", "value": 63.506842893061076}, {"type": "manhattan_spearman", "value": 62.51289573046917}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de)", "type": "mteb/sts22-crosslingual-sts", "config": "de", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 48.2248801729127}, {"type": "cos_sim_spearman", "value": 56.5936604678561}, {"type": "euclidean_pearson", "value": 43.98149464089}, {"type": "euclidean_spearman", "value": 56.108561882423615}, {"type": "manhattan_pearson", "value": 43.86880305903564}, {"type": "manhattan_spearman", "value": 56.04671150510166}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es)", "type": "mteb/sts22-crosslingual-sts", "config": "es", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, 
"metrics": [{"type": "cos_sim_pearson", "value": 55.17564527009831}, {"type": "cos_sim_spearman", "value": 64.57978560979488}, {"type": "euclidean_pearson", "value": 58.8818330154583}, {"type": "euclidean_spearman", "value": 64.99214839071281}, {"type": "manhattan_pearson", "value": 58.72671436121381}, {"type": "manhattan_spearman", "value": 65.10713416616109}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl)", "type": "mteb/sts22-crosslingual-sts", "config": "pl", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 26.772131864023297}, {"type": "cos_sim_spearman", "value": 34.68200792408681}, {"type": "euclidean_pearson", "value": 16.68082419005441}, {"type": "euclidean_spearman", "value": 34.83099932652166}, {"type": "manhattan_pearson", "value": 16.52605949659529}, {"type": "manhattan_spearman", "value": 34.82075801399475}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (tr)", "type": "mteb/sts22-crosslingual-sts", "config": "tr", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 54.42415189043831}, {"type": "cos_sim_spearman", "value": 63.54594264576758}, {"type": "euclidean_pearson", "value": 57.36577498297745}, {"type": "euclidean_spearman", "value": 63.111466379158074}, {"type": "manhattan_pearson", "value": 57.584543715873885}, {"type": "manhattan_spearman", "value": 63.22361054139183}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (ar)", "type": "mteb/sts22-crosslingual-sts", "config": "ar", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 47.55216762405518}, {"type": "cos_sim_spearman", "value": 56.98670142896412}, {"type": "euclidean_pearson", "value": 50.15318757562699}, {"type": "euclidean_spearman", "value": 56.524941926541906}, {"type": "manhattan_pearson", "value": 49.955618528674904}, {"type": "manhattan_spearman", "value": 56.37102209240117}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (ru)", "type": "mteb/sts22-crosslingual-sts", "config": "ru", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 49.20540980338571}, {"type": "cos_sim_spearman", "value": 59.9009453504406}, {"type": "euclidean_pearson", "value": 49.557749853620535}, {"type": "euclidean_spearman", "value": 59.76631621172456}, {"type": "manhattan_pearson", "value": 49.62340591181147}, {"type": "manhattan_spearman", "value": 59.94224880322436}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh)", "type": "mteb/sts22-crosslingual-sts", "config": "zh", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 51.508169956576985}, {"type": "cos_sim_spearman", "value": 66.82461565306046}, {"type": "euclidean_pearson", "value": 56.2274426480083}, {"type": "euclidean_spearman", "value": 66.6775323848333}, {"type": "manhattan_pearson", "value": 55.98277796300661}, {"type": "manhattan_spearman", "value": 66.63669848497175}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr)", "type": "mteb/sts22-crosslingual-sts", "config": "fr", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 72.86478788045507}, {"type": "cos_sim_spearman", "value": 76.7946552053193}, {"type": "euclidean_pearson", "value": 75.01598530490269}, 
{"type": "euclidean_spearman", "value": 76.83618917858281}, {"type": "manhattan_pearson", "value": 74.68337628304332}, {"type": "manhattan_spearman", "value": 76.57480204017773}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-en)", "type": "mteb/sts22-crosslingual-sts", "config": "de-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 55.922619099401984}, {"type": "cos_sim_spearman", "value": 56.599362477240774}, {"type": "euclidean_pearson", "value": 56.68307052369783}, {"type": "euclidean_spearman", "value": 54.28760436777401}, {"type": "manhattan_pearson", "value": 56.67763566500681}, {"type": "manhattan_spearman", "value": 53.94619541711359}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-en)", "type": "mteb/sts22-crosslingual-sts", "config": "es-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 66.74357206710913}, {"type": "cos_sim_spearman", "value": 72.5208244925311}, {"type": "euclidean_pearson", "value": 67.49254562186032}, {"type": "euclidean_spearman", "value": 72.02469076238683}, {"type": "manhattan_pearson", "value": 67.45251772238085}, {"type": "manhattan_spearman", "value": 72.05538819984538}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (it)", "type": "mteb/sts22-crosslingual-sts", "config": "it", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 71.25734330033191}, {"type": "cos_sim_spearman", "value": 76.98349083946823}, {"type": "euclidean_pearson", "value": 73.71642838667736}, {"type": "euclidean_spearman", "value": 77.01715504651384}, {"type": "manhattan_pearson", "value": 73.61712711868105}, {"type": "manhattan_spearman", "value": 77.01392571153896}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl-en)", "type": "mteb/sts22-crosslingual-sts", "config": "pl-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 63.18215462781212}, {"type": "cos_sim_spearman", "value": 65.54373266117607}, {"type": "euclidean_pearson", "value": 64.54126095439005}, {"type": "euclidean_spearman", "value": 65.30410369102711}, {"type": "manhattan_pearson", "value": 63.50332221148234}, {"type": "manhattan_spearman", "value": 64.3455878104313}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh-en)", "type": "mteb/sts22-crosslingual-sts", "config": "zh-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 62.30509221440029}, {"type": "cos_sim_spearman", "value": 65.99582704642478}, {"type": "euclidean_pearson", "value": 63.43818859884195}, {"type": "euclidean_spearman", "value": 66.83172582815764}, {"type": "manhattan_pearson", "value": 63.055779168508764}, {"type": "manhattan_spearman", "value": 65.49585020501449}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-it)", "type": "mteb/sts22-crosslingual-sts", "config": "es-it", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 59.587830825340404}, {"type": "cos_sim_spearman", "value": 68.93467614588089}, {"type": "euclidean_pearson", "value": 62.3073527367404}, {"type": "euclidean_spearman", "value": 69.69758171553175}, {"type": "manhattan_pearson", "value": 61.9074580815789}, {"type": "manhattan_spearman", "value": 
69.57696375597865}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-fr)", "type": "mteb/sts22-crosslingual-sts", "config": "de-fr", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 57.143220125577066}, {"type": "cos_sim_spearman", "value": 67.78857859159226}, {"type": "euclidean_pearson", "value": 55.58225107923733}, {"type": "euclidean_spearman", "value": 67.80662907184563}, {"type": "manhattan_pearson", "value": 56.24953502726514}, {"type": "manhattan_spearman", "value": 67.98262125431616}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-pl)", "type": "mteb/sts22-crosslingual-sts", "config": "de-pl", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 21.826928900322066}, {"type": "cos_sim_spearman", "value": 49.578506634400405}, {"type": "euclidean_pearson", "value": 27.939890138843214}, {"type": "euclidean_spearman", "value": 52.71950519136242}, {"type": "manhattan_pearson", "value": 26.39878683847546}, {"type": "manhattan_spearman", "value": 47.54609580342499}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr-pl)", "type": "mteb/sts22-crosslingual-sts", "config": "fr-pl", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 57.27603854632001}, {"type": "cos_sim_spearman", "value": 50.709255283710995}, {"type": "euclidean_pearson", "value": 59.5419024445929}, {"type": "euclidean_spearman", "value": 50.709255283710995}, {"type": "manhattan_pearson", "value": 59.03256832438492}, {"type": "manhattan_spearman", "value": 61.97797868009122}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.00757054859712}, {"type": "cos_sim_spearman", "value": 87.29283629622222}, {"type": "euclidean_pearson", "value": 86.54824171775536}, {"type": "euclidean_spearman", "value": 87.24364730491402}, {"type": "manhattan_pearson", "value": 86.5062156915074}, {"type": "manhattan_spearman", "value": 87.15052170378574}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 82.03549357197389}, {"type": "mrr", "value": 95.05437645143527}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 57.260999999999996}, {"type": "map_at_10", "value": 66.259}, {"type": "map_at_100", "value": 66.884}, {"type": "map_at_1000", "value": 66.912}, {"type": "map_at_3", "value": 63.685}, {"type": "map_at_5", "value": 65.35499999999999}, {"type": "mrr_at_1", "value": 60.333000000000006}, {"type": "mrr_at_10", "value": 67.5}, {"type": "mrr_at_100", "value": 68.013}, {"type": "mrr_at_1000", "value": 68.038}, {"type": "mrr_at_3", "value": 65.61099999999999}, {"type": "mrr_at_5", "value": 66.861}, {"type": "ndcg_at_1", "value": 60.333000000000006}, {"type": "ndcg_at_10", "value": 70.41}, {"type": "ndcg_at_100", "value": 73.10600000000001}, {"type": "ndcg_at_1000", "value": 73.846}, {"type": "ndcg_at_3", "value": 66.133}, {"type": "ndcg_at_5", "value": 68.499}, {"type": 
"precision_at_1", "value": 60.333000000000006}, {"type": "precision_at_10", "value": 9.232999999999999}, {"type": "precision_at_100", "value": 1.0630000000000002}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 25.667}, {"type": "precision_at_5", "value": 17.067}, {"type": "recall_at_1", "value": 57.260999999999996}, {"type": "recall_at_10", "value": 81.94399999999999}, {"type": "recall_at_100", "value": 93.867}, {"type": "recall_at_1000", "value": 99.667}, {"type": "recall_at_3", "value": 70.339}, {"type": "recall_at_5", "value": 76.25}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.74356435643564}, {"type": "cos_sim_ap", "value": 93.13411948212683}, {"type": "cos_sim_f1", "value": 86.80521991300147}, {"type": "cos_sim_precision", "value": 84.00374181478017}, {"type": "cos_sim_recall", "value": 89.8}, {"type": "dot_accuracy", "value": 99.67920792079208}, {"type": "dot_ap", "value": 89.27277565444479}, {"type": "dot_f1", "value": 83.9276990718124}, {"type": "dot_precision", "value": 82.04393505253104}, {"type": "dot_recall", "value": 85.9}, {"type": "euclidean_accuracy", "value": 99.74257425742574}, {"type": "euclidean_ap", "value": 93.17993008259062}, {"type": "euclidean_f1", "value": 86.69396110542476}, {"type": "euclidean_precision", "value": 88.78406708595388}, {"type": "euclidean_recall", "value": 84.7}, {"type": "manhattan_accuracy", "value": 99.74257425742574}, {"type": "manhattan_ap", "value": 93.14413755550099}, {"type": "manhattan_f1", "value": 86.82483594144371}, {"type": "manhattan_precision", "value": 87.66564729867483}, {"type": "manhattan_recall", "value": 86}, {"type": "max_accuracy", "value": 99.74356435643564}, {"type": "max_ap", "value": 93.17993008259062}, {"type": "max_f1", "value": 86.82483594144371}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 57.525863806168566}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 32.68850574423839}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 49.71580650644033}, {"type": "mrr", "value": 50.50971903913081}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 29.152190498799484}, {"type": "cos_sim_spearman", "value": 29.686180371952727}, {"type": "dot_pearson", "value": 27.248664793816342}, {"type": "dot_spearman", "value": 28.37748983721745}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": 
"None"}, "metrics": [{"type": "map_at_1", "value": 0.20400000000000001}, {"type": "map_at_10", "value": 1.6209999999999998}, {"type": "map_at_100", "value": 9.690999999999999}, {"type": "map_at_1000", "value": 23.733}, {"type": "map_at_3", "value": 0.575}, {"type": "map_at_5", "value": 0.885}, {"type": "mrr_at_1", "value": 78}, {"type": "mrr_at_10", "value": 86.56700000000001}, {"type": "mrr_at_100", "value": 86.56700000000001}, {"type": "mrr_at_1000", "value": 86.56700000000001}, {"type": "mrr_at_3", "value": 85.667}, {"type": "mrr_at_5", "value": 86.56700000000001}, {"type": "ndcg_at_1", "value": 76}, {"type": "ndcg_at_10", "value": 71.326}, {"type": "ndcg_at_100", "value": 54.208999999999996}, {"type": "ndcg_at_1000", "value": 49.252}, {"type": "ndcg_at_3", "value": 74.235}, {"type": "ndcg_at_5", "value": 73.833}, {"type": "precision_at_1", "value": 78}, {"type": "precision_at_10", "value": 74.8}, {"type": "precision_at_100", "value": 55.50000000000001}, {"type": "precision_at_1000", "value": 21.836}, {"type": "precision_at_3", "value": 78}, {"type": "precision_at_5", "value": 78}, {"type": "recall_at_1", "value": 0.20400000000000001}, {"type": "recall_at_10", "value": 1.894}, {"type": "recall_at_100", "value": 13.245999999999999}, {"type": "recall_at_1000", "value": 46.373}, {"type": "recall_at_3", "value": 0.613}, {"type": "recall_at_5", "value": 0.991}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (sqi-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "sqi-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 95.89999999999999}, {"type": "f1", "value": 94.69999999999999}, {"type": "precision", "value": 94.11666666666667}, {"type": "recall", "value": 95.89999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fry-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fry-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 68.20809248554913}, {"type": "f1", "value": 63.431048720066066}, {"type": "precision", "value": 61.69143958161298}, {"type": "recall", "value": 68.20809248554913}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kur-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kur-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 71.21951219512195}, {"type": "f1", "value": 66.82926829268293}, {"type": "precision", "value": 65.1260162601626}, {"type": "recall", "value": 71.21951219512195}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tur-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tur-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 97.2}, {"type": "f1", "value": 96.26666666666667}, {"type": "precision", "value": 95.8}, {"type": "recall", "value": 97.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (deu-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "deu-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 99.3}, {"type": "f1", "value": 99.06666666666666}, {"type": "precision", "value": 98.95}, {"type": "recall", "value": 99.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nld-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nld-eng", 
"split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 97.39999999999999}, {"type": "f1", "value": 96.63333333333333}, {"type": "precision", "value": 96.26666666666668}, {"type": "recall", "value": 97.39999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ron-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ron-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 96}, {"type": "f1", "value": 94.86666666666666}, {"type": "precision", "value": 94.31666666666668}, {"type": "recall", "value": 96}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ang-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ang-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 47.01492537313433}, {"type": "f1", "value": 40.178867566927266}, {"type": "precision", "value": 38.179295828549556}, {"type": "recall", "value": 47.01492537313433}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ido-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ido-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 86.5}, {"type": "f1", "value": 83.62537480063796}, {"type": "precision", "value": 82.44555555555554}, {"type": "recall", "value": 86.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (jav-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "jav-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 80.48780487804879}, {"type": "f1", "value": 75.45644599303138}, {"type": "precision", "value": 73.37398373983739}, {"type": "recall", "value": 80.48780487804879}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (isl-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "isl-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 93.7}, {"type": "f1", "value": 91.95666666666666}, {"type": "precision", "value": 91.125}, {"type": "recall", "value": 93.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (slv-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "slv-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 91.73754556500607}, {"type": "f1", "value": 89.65168084244632}, {"type": "precision", "value": 88.73025516403402}, {"type": "recall", "value": 91.73754556500607}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cym-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cym-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 81.04347826086956}, {"type": "f1", "value": 76.2128364389234}, {"type": "precision", "value": 74.2}, {"type": "recall", "value": 81.04347826086956}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kaz-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kaz-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 83.65217391304348}, {"type": "f1", "value": 79.4376811594203}, {"type": "precision", "value": 77.65797101449274}, {"type": "recall", "value": 83.65217391304348}]}, {"task": 
{"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (est-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "est-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 87.5}, {"type": "f1", "value": 85.02690476190476}, {"type": "precision", "value": 83.96261904761904}, {"type": "recall", "value": 87.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (heb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "heb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 89.3}, {"type": "f1", "value": 86.52333333333333}, {"type": "precision", "value": 85.22833333333332}, {"type": "recall", "value": 89.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (gla-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "gla-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 65.01809408926418}, {"type": "f1", "value": 59.00594446432805}, {"type": "precision", "value": 56.827215807915444}, {"type": "recall", "value": 65.01809408926418}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mar-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mar-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 91.2}, {"type": "f1", "value": 88.58}, {"type": "precision", "value": 87.33333333333334}, {"type": "recall", "value": 91.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (lat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 59.199999999999996}, {"type": "f1", "value": 53.299166276284915}, {"type": "precision", "value": 51.3383908045977}, {"type": "recall", "value": 59.199999999999996}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bel-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bel-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 93.2}, {"type": "f1", "value": 91.2}, {"type": "precision", "value": 90.25}, {"type": "recall", "value": 93.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pms-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pms-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 64.76190476190476}, {"type": "f1", "value": 59.867110667110666}, {"type": "precision", "value": 58.07390192653351}, {"type": "recall", "value": 64.76190476190476}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (gle-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "gle-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 76.2}, {"type": "f1", "value": 71.48147546897547}, {"type": "precision", "value": 69.65409090909091}, {"type": "recall", "value": 76.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pes-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pes-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 93.8}, {"type": "f1", "value": 92.14}, {"type": "precision", "value": 91.35833333333333}, {"type": 
"recall", "value": 93.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nob-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nob-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 97.89999999999999}, {"type": "f1", "value": 97.2}, {"type": "precision", "value": 96.85000000000001}, {"type": "recall", "value": 97.89999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bul-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bul-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.6}, {"type": "f1", "value": 92.93333333333334}, {"type": "precision", "value": 92.13333333333333}, {"type": "recall", "value": 94.6}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cbk-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cbk-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 74.1}, {"type": "f1", "value": 69.14817460317461}, {"type": "precision", "value": 67.2515873015873}, {"type": "recall", "value": 74.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hun-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hun-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 95.19999999999999}, {"type": "f1", "value": 94.01333333333335}, {"type": "precision", "value": 93.46666666666667}, {"type": "recall", "value": 95.19999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (uig-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "uig-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 76.9}, {"type": "f1", "value": 72.07523809523809}, {"type": "precision", "value": 70.19777777777779}, {"type": "recall", "value": 76.9}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (rus-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "rus-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.1}, {"type": "f1", "value": 92.31666666666666}, {"type": "precision", "value": 91.43333333333332}, {"type": "recall", "value": 94.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (spa-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "spa-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 97.8}, {"type": "f1", "value": 97.1}, {"type": "precision", "value": 96.76666666666668}, {"type": "recall", "value": 97.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hye-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hye-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 92.85714285714286}, {"type": "f1", "value": 90.92093441150045}, {"type": "precision", "value": 90.00449236298293}, {"type": "recall", "value": 92.85714285714286}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tel-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tel-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 93.16239316239316}, {"type": "f1", "value": 
91.33903133903132}, {"type": "precision", "value": 90.56267806267806}, {"type": "recall", "value": 93.16239316239316}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (afr-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "afr-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 92.4}, {"type": "f1", "value": 90.25666666666666}, {"type": "precision", "value": 89.25833333333334}, {"type": "recall", "value": 92.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mon-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mon-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 90.22727272727272}, {"type": "f1", "value": 87.53030303030303}, {"type": "precision", "value": 86.37121212121211}, {"type": "recall", "value": 90.22727272727272}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (arz-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "arz-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 79.03563941299791}, {"type": "f1", "value": 74.7349505840072}, {"type": "precision", "value": 72.9035639412998}, {"type": "recall", "value": 79.03563941299791}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hrv-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hrv-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 97}, {"type": "f1", "value": 96.15}, {"type": "precision", "value": 95.76666666666668}, {"type": "recall", "value": 97}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nov-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nov-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 76.26459143968872}, {"type": "f1", "value": 71.55642023346303}, {"type": "precision", "value": 69.7544932369835}, {"type": "recall", "value": 76.26459143968872}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (gsw-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "gsw-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 58.119658119658126}, {"type": "f1", "value": 51.65242165242165}, {"type": "precision", "value": 49.41768108434775}, {"type": "recall", "value": 58.119658119658126}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nds-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nds-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 74.3}, {"type": "f1", "value": 69.52055555555555}, {"type": "precision", "value": 67.7574938949939}, {"type": "recall", "value": 74.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ukr-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ukr-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.8}, {"type": "f1", "value": 93.31666666666666}, {"type": "precision", "value": 92.60000000000001}, {"type": "recall", "value": 94.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (uzb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "uzb-eng", "split": "test", "revision": 
"9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 76.63551401869158}, {"type": "f1", "value": 72.35202492211837}, {"type": "precision", "value": 70.60358255451713}, {"type": "recall", "value": 76.63551401869158}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (lit-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lit-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 90.4}, {"type": "f1", "value": 88.4811111111111}, {"type": "precision", "value": 87.7452380952381}, {"type": "recall", "value": 90.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ina-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ina-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 95}, {"type": "f1", "value": 93.60666666666667}, {"type": "precision", "value": 92.975}, {"type": "recall", "value": 95}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (lfn-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lfn-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 67.2}, {"type": "f1", "value": 63.01595782872099}, {"type": "precision", "value": 61.596587301587306}, {"type": "recall", "value": 67.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (zsm-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "zsm-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 95.7}, {"type": "f1", "value": 94.52999999999999}, {"type": "precision", "value": 94}, {"type": "recall", "value": 95.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ita-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ita-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.6}, {"type": "f1", "value": 93.28999999999999}, {"type": "precision", "value": 92.675}, {"type": "recall", "value": 94.6}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cmn-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cmn-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 96.39999999999999}, {"type": "f1", "value": 95.28333333333333}, {"type": "precision", "value": 94.75}, {"type": "recall", "value": 96.39999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (lvs-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lvs-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 91.9}, {"type": "f1", "value": 89.83}, {"type": "precision", "value": 88.92}, {"type": "recall", "value": 91.9}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (glg-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "glg-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.69999999999999}, {"type": "f1", "value": 93.34222222222223}, {"type": "precision", "value": 92.75416666666668}, {"type": "recall", "value": 94.69999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ceb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ceb-eng", "split": "test", "revision": 
"9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 60.333333333333336}, {"type": "f1", "value": 55.31203703703703}, {"type": "precision", "value": 53.39971108326371}, {"type": "recall", "value": 60.333333333333336}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bre-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bre-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 12.9}, {"type": "f1", "value": 11.099861903031458}, {"type": "precision", "value": 10.589187932631877}, {"type": "recall", "value": 12.9}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ben-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ben-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 86.7}, {"type": "f1", "value": 83.0152380952381}, {"type": "precision", "value": 81.37833333333333}, {"type": "recall", "value": 86.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (swg-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "swg-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 63.39285714285714}, {"type": "f1", "value": 56.832482993197274}, {"type": "precision", "value": 54.56845238095237}, {"type": "recall", "value": 63.39285714285714}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (arq-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "arq-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 48.73765093304062}, {"type": "f1", "value": 41.555736920720456}, {"type": "precision", "value": 39.06874531737319}, {"type": "recall", "value": 48.73765093304062}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kab-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kab-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 41.099999999999994}, {"type": "f1", "value": 36.540165945165946}, {"type": "precision", "value": 35.05175685425686}, {"type": "recall", "value": 41.099999999999994}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fra-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fra-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.89999999999999}, {"type": "f1", "value": 93.42333333333333}, {"type": "precision", "value": 92.75833333333333}, {"type": "recall", "value": 94.89999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (por-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "por-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.89999999999999}, {"type": "f1", "value": 93.63333333333334}, {"type": "precision", "value": 93.01666666666665}, {"type": "recall", "value": 94.89999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 77.9}, {"type": "f1", "value": 73.64833333333334}, {"type": "precision", "value": 71.90282106782105}, {"type": "recall", "value": 77.9}]}, 
{"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (oci-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "oci-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 59.4}, {"type": "f1", "value": 54.90521367521367}, {"type": "precision", "value": 53.432840025471606}, {"type": "recall", "value": 59.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pol-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pol-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 97.39999999999999}, {"type": "f1", "value": 96.6}, {"type": "precision", "value": 96.2}, {"type": "recall", "value": 97.39999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (war-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "war-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 67.2}, {"type": "f1", "value": 62.25926129426129}, {"type": "precision", "value": 60.408376623376626}, {"type": "recall", "value": 67.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (aze-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "aze-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 90.2}, {"type": "f1", "value": 87.60666666666667}, {"type": "precision", "value": 86.45277777777778}, {"type": "recall", "value": 90.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (vie-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "vie-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 97.7}, {"type": "f1", "value": 97}, {"type": "precision", "value": 96.65}, {"type": "recall", "value": 97.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nno-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nno-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 93.2}, {"type": "f1", "value": 91.39746031746031}, {"type": "precision", "value": 90.6125}, {"type": "recall", "value": 93.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cha-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cha-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 32.11678832116788}, {"type": "f1", "value": 27.210415386260234}, {"type": "precision", "value": 26.20408990846947}, {"type": "recall", "value": 32.11678832116788}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mhr-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mhr-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 8.5}, {"type": "f1", "value": 6.787319277832475}, {"type": "precision", "value": 6.3452094433344435}, {"type": "recall", "value": 8.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (dan-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "dan-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 96.1}, {"type": "f1", "value": 95.08}, {"type": "precision", "value": 94.61666666666667}, {"type": "recall", "value": 96.1}]}, {"task": {"type": 
"BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ell-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ell-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 95.3}, {"type": "f1", "value": 93.88333333333333}, {"type": "precision", "value": 93.18333333333332}, {"type": "recall", "value": 95.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (amh-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "amh-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 85.11904761904762}, {"type": "f1", "value": 80.69444444444444}, {"type": "precision", "value": 78.72023809523809}, {"type": "recall", "value": 85.11904761904762}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pam-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pam-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 11.1}, {"type": "f1", "value": 9.276381801735853}, {"type": "precision", "value": 8.798174603174601}, {"type": "recall", "value": 11.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hsb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hsb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 63.56107660455487}, {"type": "f1", "value": 58.70433569191332}, {"type": "precision", "value": 56.896926581464015}, {"type": "recall", "value": 63.56107660455487}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (srp-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "srp-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.69999999999999}, {"type": "f1", "value": 93.10000000000001}, {"type": "precision", "value": 92.35}, {"type": "recall", "value": 94.69999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (epo-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "epo-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 96.8}, {"type": "f1", "value": 96.01222222222222}, {"type": "precision", "value": 95.67083333333332}, {"type": "recall", "value": 96.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kzj-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kzj-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 9.2}, {"type": "f1", "value": 7.911555250305249}, {"type": "precision", "value": 7.631246556216846}, {"type": "recall", "value": 9.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (awa-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "awa-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 77.48917748917748}, {"type": "f1", "value": 72.27375798804371}, {"type": "precision", "value": 70.14430014430013}, {"type": "recall", "value": 77.48917748917748}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fao-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fao-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 77.09923664122137}, {"type": "f1", "value": 72.61541257724463}, 
{"type": "precision", "value": 70.8998380754106}, {"type": "recall", "value": 77.09923664122137}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mal-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mal-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 98.2532751091703}, {"type": "f1", "value": 97.69529354682193}, {"type": "precision", "value": 97.42843279961184}, {"type": "recall", "value": 98.2532751091703}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ile-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ile-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 82.8}, {"type": "f1", "value": 79.14672619047619}, {"type": "precision", "value": 77.59489247311828}, {"type": "recall", "value": 82.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bos-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bos-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.35028248587571}, {"type": "f1", "value": 92.86252354048965}, {"type": "precision", "value": 92.2080979284369}, {"type": "recall", "value": 94.35028248587571}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cor-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cor-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 8.5}, {"type": "f1", "value": 6.282429263935621}, {"type": "precision", "value": 5.783274240739785}, {"type": "recall", "value": 8.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 92.7}, {"type": "f1", "value": 91.025}, {"type": "precision", "value": 90.30428571428571}, {"type": "recall", "value": 92.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (eus-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "eus-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 81}, {"type": "f1", "value": 77.8232380952381}, {"type": "precision", "value": 76.60194444444444}, {"type": "recall", "value": 81}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (yue-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "yue-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 91}, {"type": "f1", "value": 88.70857142857142}, {"type": "precision", "value": 87.7}, {"type": "recall", "value": 91}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (swe-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "swe-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 96.39999999999999}, {"type": "f1", "value": 95.3}, {"type": "precision", "value": 94.76666666666667}, {"type": "recall", "value": 96.39999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (dtp-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "dtp-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 8.1}, 
{"type": "f1", "value": 7.001008218834307}, {"type": "precision", "value": 6.708329562594269}, {"type": "recall", "value": 8.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 87.1313672922252}, {"type": "f1", "value": 84.09070598748882}, {"type": "precision", "value": 82.79171454104429}, {"type": "recall", "value": 87.1313672922252}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (jpn-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "jpn-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 96.39999999999999}, {"type": "f1", "value": 95.28333333333333}, {"type": "precision", "value": 94.73333333333332}, {"type": "recall", "value": 96.39999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (csb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "csb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 42.29249011857708}, {"type": "f1", "value": 36.981018542283365}, {"type": "precision", "value": 35.415877813576024}, {"type": "recall", "value": 42.29249011857708}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (xho-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "xho-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 83.80281690140845}, {"type": "f1", "value": 80.86854460093896}, {"type": "precision", "value": 79.60093896713614}, {"type": "recall", "value": 83.80281690140845}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (orv-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "orv-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 45.26946107784431}, {"type": "f1", "value": 39.80235464678088}, {"type": "precision", "value": 38.14342660001342}, {"type": "recall", "value": 45.26946107784431}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ind-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ind-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.3}, {"type": "f1", "value": 92.9}, {"type": "precision", "value": 92.26666666666668}, {"type": "recall", "value": 94.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tuk-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tuk-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 37.93103448275862}, {"type": "f1", "value": 33.15192743764172}, {"type": "precision", "value": 31.57456528146183}, {"type": "recall", "value": 37.93103448275862}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (max-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "max-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 69.01408450704226}, {"type": "f1", "value": 63.41549295774648}, {"type": "precision", "value": 61.342778895595806}, {"type": "recall", "value": 69.01408450704226}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (swh-eng)", "type": 
"mteb/tatoeba-bitext-mining", "config": "swh-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 76.66666666666667}, {"type": "f1", "value": 71.60705960705961}, {"type": "precision", "value": 69.60683760683762}, {"type": "recall", "value": 76.66666666666667}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hin-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hin-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 95.8}, {"type": "f1", "value": 94.48333333333333}, {"type": "precision", "value": 93.83333333333333}, {"type": "recall", "value": 95.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (dsb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "dsb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 52.81837160751566}, {"type": "f1", "value": 48.435977731384824}, {"type": "precision", "value": 47.11291973845539}, {"type": "recall", "value": 52.81837160751566}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ber-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ber-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 44.9}, {"type": "f1", "value": 38.88962621607783}, {"type": "precision", "value": 36.95936507936508}, {"type": "recall", "value": 44.9}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tam-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tam-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 90.55374592833876}, {"type": "f1", "value": 88.22553125484721}, {"type": "precision", "value": 87.26927252985884}, {"type": "recall", "value": 90.55374592833876}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (slk-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "slk-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.6}, {"type": "f1", "value": 93.13333333333333}, {"type": "precision", "value": 92.45333333333333}, {"type": "recall", "value": 94.6}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tgl-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tgl-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 93.7}, {"type": "f1", "value": 91.99666666666667}, {"type": "precision", "value": 91.26666666666668}, {"type": "recall", "value": 93.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ast-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ast-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 85.03937007874016}, {"type": "f1", "value": 81.75853018372703}, {"type": "precision", "value": 80.34120734908137}, {"type": "recall", "value": 85.03937007874016}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mkd-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mkd-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 88.3}, {"type": "f1", "value": 85.5}, {"type": "precision", "value": 84.25833333333334}, {"type": "recall", "value": 
88.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (khm-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "khm-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 65.51246537396122}, {"type": "f1", "value": 60.02297410192148}, {"type": "precision", "value": 58.133467727289236}, {"type": "recall", "value": 65.51246537396122}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ces-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ces-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 96}, {"type": "f1", "value": 94.89}, {"type": "precision", "value": 94.39166666666667}, {"type": "recall", "value": 96}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tzl-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tzl-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 57.692307692307686}, {"type": "f1", "value": 53.162393162393165}, {"type": "precision", "value": 51.70673076923077}, {"type": "recall", "value": 57.692307692307686}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (urd-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "urd-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 91.60000000000001}, {"type": "f1", "value": 89.21190476190475}, {"type": "precision", "value": 88.08666666666667}, {"type": "recall", "value": 91.60000000000001}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ara-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ara-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 88}, {"type": "f1", "value": 85.47}, {"type": "precision", "value": 84.43266233766234}, {"type": "recall", "value": 88}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kor-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kor-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 92.7}, {"type": "f1", "value": 90.64999999999999}, {"type": "precision", "value": 89.68333333333332}, {"type": "recall", "value": 92.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (yid-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "yid-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 80.30660377358491}, {"type": "f1", "value": 76.33044137466307}, {"type": "precision", "value": 74.78970125786164}, {"type": "recall", "value": 80.30660377358491}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fin-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fin-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 96.39999999999999}, {"type": "f1", "value": 95.44}, {"type": "precision", "value": 94.99166666666666}, {"type": "recall", "value": 96.39999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tha-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tha-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 96.53284671532847}, {"type": "f1", 
"value": 95.37712895377129}, {"type": "precision", "value": 94.7992700729927}, {"type": "recall", "value": 96.53284671532847}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (wuu-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "wuu-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 89}, {"type": "f1", "value": 86.23190476190476}, {"type": "precision", "value": 85.035}, {"type": "recall", "value": 89}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.585}, {"type": "map_at_10", "value": 9.012}, {"type": "map_at_100", "value": 14.027000000000001}, {"type": "map_at_1000", "value": 15.565000000000001}, {"type": "map_at_3", "value": 5.032}, {"type": "map_at_5", "value": 6.657}, {"type": "mrr_at_1", "value": 28.571}, {"type": "mrr_at_10", "value": 45.377}, {"type": "mrr_at_100", "value": 46.119}, {"type": "mrr_at_1000", "value": 46.127}, {"type": "mrr_at_3", "value": 41.156}, {"type": "mrr_at_5", "value": 42.585}, {"type": "ndcg_at_1", "value": 27.551}, {"type": "ndcg_at_10", "value": 23.395}, {"type": "ndcg_at_100", "value": 33.342}, {"type": "ndcg_at_1000", "value": 45.523}, {"type": "ndcg_at_3", "value": 25.158}, {"type": "ndcg_at_5", "value": 23.427}, {"type": "precision_at_1", "value": 28.571}, {"type": "precision_at_10", "value": 21.429000000000002}, {"type": "precision_at_100", "value": 6.714}, {"type": "precision_at_1000", "value": 1.473}, {"type": "precision_at_3", "value": 27.211000000000002}, {"type": "precision_at_5", "value": 24.490000000000002}, {"type": "recall_at_1", "value": 2.585}, {"type": "recall_at_10", "value": 15.418999999999999}, {"type": "recall_at_100", "value": 42.485}, {"type": "recall_at_1000", "value": 79.536}, {"type": "recall_at_3", "value": 6.239999999999999}, {"type": "recall_at_5", "value": 8.996}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 71.3234}, {"type": "ap", "value": 14.361688653847423}, {"type": "f1", "value": 54.819068624319044}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 61.97792869269949}, {"type": "f1", "value": 62.28965628513728}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 38.90540145385218}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.53513739047506}, {"type": "cos_sim_ap", "value": 75.27741586677557}, {"type": "cos_sim_f1", "value": 69.18792902473774}, {"type": "cos_sim_precision", "value": 67.94708725515136}, {"type": "cos_sim_recall", "value": 
70.47493403693932}, {"type": "dot_accuracy", "value": 84.7052512368123}, {"type": "dot_ap", "value": 69.36075482849378}, {"type": "dot_f1", "value": 64.44688376631296}, {"type": "dot_precision", "value": 59.92288500793831}, {"type": "dot_recall", "value": 69.70976253298153}, {"type": "euclidean_accuracy", "value": 86.60666388508076}, {"type": "euclidean_ap", "value": 75.47512772621097}, {"type": "euclidean_f1", "value": 69.413872536473}, {"type": "euclidean_precision", "value": 67.39562624254472}, {"type": "euclidean_recall", "value": 71.55672823218997}, {"type": "manhattan_accuracy", "value": 86.52917684925792}, {"type": "manhattan_ap", "value": 75.34000110496703}, {"type": "manhattan_f1", "value": 69.28489190226429}, {"type": "manhattan_precision", "value": 67.24608889992551}, {"type": "manhattan_recall", "value": 71.45118733509234}, {"type": "max_accuracy", "value": 86.60666388508076}, {"type": "max_ap", "value": 75.47512772621097}, {"type": "max_f1", "value": 69.413872536473}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.01695967710637}, {"type": "cos_sim_ap", "value": 85.8298270742901}, {"type": "cos_sim_f1", "value": 78.46988128389272}, {"type": "cos_sim_precision", "value": 74.86017897091722}, {"type": "cos_sim_recall", "value": 82.44533415460425}, {"type": "dot_accuracy", "value": 88.19420188613343}, {"type": "dot_ap", "value": 83.82679165901324}, {"type": "dot_f1", "value": 76.55833777304208}, {"type": "dot_precision", "value": 75.6884875846501}, {"type": "dot_recall", "value": 77.44841392054204}, {"type": "euclidean_accuracy", "value": 89.03054294252338}, {"type": "euclidean_ap", "value": 85.89089555185325}, {"type": "euclidean_f1", "value": 78.62997658079624}, {"type": "euclidean_precision", "value": 74.92329149232914}, {"type": "euclidean_recall", "value": 82.72251308900523}, {"type": "manhattan_accuracy", "value": 89.0266620095471}, {"type": "manhattan_ap", "value": 85.86458997929147}, {"type": "manhattan_f1", "value": 78.50685331000291}, {"type": "manhattan_precision", "value": 74.5499861534201}, {"type": "manhattan_recall", "value": 82.90729904527257}, {"type": "max_accuracy", "value": 89.03054294252338}, {"type": "max_ap", "value": 85.89089555185325}, {"type": "max_f1", "value": 78.62997658079624}]}]}]}
dataset
null
508
zeroshot/gte-large-quant
zeroshot
feature-extraction
[ "transformers", "onnx", "bert", "feature-extraction", "sparse sparsity quantized onnx embeddings int8", "mteb", "en", "license:mit", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-10-15T18:10:53Z
2023-10-22T21:00:09+00:00
12
0
--- language: - en license: mit tags: - sparse sparsity quantized onnx embeddings int8 - mteb model-index: - name: gte-large-quant results: - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 90.27260027646717 - type: cos_sim_spearman value: 87.97790825077952 - type: euclidean_pearson value: 88.42832241523092 - type: euclidean_spearman value: 87.97248644049293 - type: manhattan_pearson value: 88.13802465778512 - type: manhattan_spearman value: 87.43391995202266 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.1416039713116 - type: cos_sim_spearman value: 79.13359419669726 - type: euclidean_pearson value: 83.08042050989465 - type: euclidean_spearman value: 79.31565112619433 - type: manhattan_pearson value: 83.10376638254372 - type: manhattan_spearman value: 79.30772376012946 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.93030439955828 - type: cos_sim_spearman value: 75.98104622572393 - type: euclidean_pearson value: 81.20791722502764 - type: euclidean_spearman value: 75.74595761987686 - type: manhattan_pearson value: 81.23169425598003 - type: manhattan_spearman value: 75.73065403644094 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.6693892097855 - type: cos_sim_spearman value: 87.54973524492165 - type: euclidean_pearson value: 86.55642466103943 - type: euclidean_spearman value: 87.47921340148683 - type: manhattan_pearson value: 86.52043275063926 - type: manhattan_spearman value: 87.43869426658489 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 84.37393784507647 - type: cos_sim_spearman value: 81.98702164762233 - type: euclidean_pearson value: 84.22038158338351 - type: euclidean_spearman value: 81.9872746771322 - type: manhattan_pearson value: 84.21915949674062 - type: manhattan_spearman value: 81.97923386273747 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.34477744314285 - type: cos_sim_spearman value: 88.92669309789463 - type: euclidean_pearson value: 88.20128441166663 - type: euclidean_spearman value: 88.91524205114627 - type: manhattan_pearson value: 88.24425729639415 - type: manhattan_spearman value: 88.97457451709523 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.11827015492467 - type: cos_sim_spearman value: 83.59397157586835 - type: euclidean_pearson value: 82.97284591328044 - type: euclidean_spearman value: 83.74509747941255 - type: manhattan_pearson value: 82.974440264842 - type: manhattan_spearman value: 83.72260506292083 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 
88.29744487677577 - type: cos_sim_spearman value: 88.50799779856109 - type: euclidean_pearson value: 89.0149154609955 - type: euclidean_spearman value: 88.72798794474068 - type: manhattan_pearson value: 89.14318227078863 - type: manhattan_spearman value: 88.98372697017017 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 70.114540107077 - type: cos_sim_spearman value: 69.72244488054433 - type: euclidean_pearson value: 70.03658853094686 - type: euclidean_spearman value: 68.96035610557085 - type: manhattan_pearson value: 69.83707789686764 - type: manhattan_spearman value: 68.71831797289812 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.86664469775837 - type: cos_sim_spearman value: 85.39649452953681 - type: euclidean_pearson value: 85.68509956626748 - type: euclidean_spearman value: 85.50984027606854 - type: manhattan_pearson value: 85.6688745008871 - type: manhattan_spearman value: 85.465201888803 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8079207920792 - type: cos_sim_ap value: 95.62897445718106 - type: cos_sim_f1 value: 90.03083247687564 - type: cos_sim_precision value: 92.60042283298098 - type: cos_sim_recall value: 87.6 - type: dot_accuracy value: 99.67029702970297 - type: dot_ap value: 90.20258347721159 - type: dot_f1 value: 83.06172839506172 - type: dot_precision value: 82.04878048780488 - type: dot_recall value: 84.1 - type: euclidean_accuracy value: 99.80594059405941 - type: euclidean_ap value: 95.53963697283662 - type: euclidean_f1 value: 89.92405063291139 - type: euclidean_precision value: 91.07692307692308 - type: euclidean_recall value: 88.8 - type: manhattan_accuracy value: 99.80594059405941 - type: manhattan_ap value: 95.55714505339634 - type: manhattan_f1 value: 90.06085192697769 - type: manhattan_precision value: 91.35802469135803 - type: manhattan_recall value: 88.8 - type: max_accuracy value: 99.8079207920792 - type: max_ap value: 95.62897445718106 - type: max_f1 value: 90.06085192697769 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.87351731537224 - type: cos_sim_ap value: 72.87360532701162 - type: cos_sim_f1 value: 67.8826895565093 - type: cos_sim_precision value: 61.918225315354505 - type: cos_sim_recall value: 75.11873350923483 - type: dot_accuracy value: 80.15139774691542 - type: dot_ap value: 53.5201503222712 - type: dot_f1 value: 53.42203179614388 - type: dot_precision value: 46.64303996849773 - type: dot_recall value: 62.50659630606861 - type: euclidean_accuracy value: 85.87351731537224 - type: euclidean_ap value: 73.10465263888227 - type: euclidean_f1 value: 68.38209376101516 - type: euclidean_precision value: 61.63948316034739 - type: euclidean_recall value: 76.78100263852242 - type: manhattan_accuracy value: 85.83775406806939 - type: manhattan_ap value: 73.08358693248583 - type: manhattan_f1 value: 68.34053485927829 - type: manhattan_precision value: 
61.303163628745025 - type: manhattan_recall value: 77.20316622691293 - type: max_accuracy value: 85.87351731537224 - type: max_ap value: 73.10465263888227 - type: max_f1 value: 68.38209376101516 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.85202002561415 - type: cos_sim_ap value: 85.58170945333845 - type: cos_sim_f1 value: 77.87783280804442 - type: cos_sim_precision value: 75.95140515222482 - type: cos_sim_recall value: 79.90452725592854 - type: dot_accuracy value: 85.29902588582296 - type: dot_ap value: 76.95795800483633 - type: dot_f1 value: 71.30231900452489 - type: dot_precision value: 65.91503267973856 - type: dot_recall value: 77.6485987064983 - type: euclidean_accuracy value: 88.80738929638684 - type: euclidean_ap value: 85.5344499509856 - type: euclidean_f1 value: 77.9805854353285 - type: euclidean_precision value: 75.97312495435624 - type: euclidean_recall value: 80.09701262704034 - type: manhattan_accuracy value: 88.7782822990647 - type: manhattan_ap value: 85.52577812395661 - type: manhattan_f1 value: 77.97958958110746 - type: manhattan_precision value: 74.76510067114094 - type: manhattan_recall value: 81.48290729904527 - type: max_accuracy value: 88.85202002561415 - type: max_ap value: 85.58170945333845 - type: max_f1 value: 77.9805854353285 --- # gte-large-quant This is the quantized (INT8) ONNX variant of the [gte-large](https://huggingface.co/thenlper/gte-large) embeddings model created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization. 

Current list of sparse and quantized gte ONNX models:

| Links | Sparsification Method |
| --------------------------------------------------------------------------------------------------- | ---------------------- |
| [zeroshot/gte-large-sparse](https://huggingface.co/zeroshot/gte-large-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-large-quant](https://huggingface.co/zeroshot/gte-large-quant) | Quantization (INT8) |
| [zeroshot/gte-base-sparse](https://huggingface.co/zeroshot/gte-base-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-base-quant](https://huggingface.co/zeroshot/gte-base-quant) | Quantization (INT8) |
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant) | Quantization (INT8) |

```bash
pip install -U deepsparse-nightly[sentence_transformers]
```

```python
from deepsparse.sentence_transformers import SentenceTransformer
model = SentenceTransformer('zeroshot/gte-large-quant', export=False)

# Our sentences we like to encode
sentences = ['This framework generates embeddings for each input sentence',
    'Sentences are passed as a list of string.',
    'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```

For further details regarding DeepSparse & Sentence Transformers integration, refer to the [DeepSparse README](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers).

For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).

![;)](https://media.giphy.com/media/bYg33GbNbNIVzSrr84/giphy-downsized-large.gif)
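
The embeddings produced above can be compared directly once they are treated as dense vectors. The snippet below is a minimal sketch, not part of the original card: it assumes `encode()` returns plain NumPy arrays (as in standard sentence-transformers usage), and the query sentence and the `cosine` helper are illustrative additions.

```python
import numpy as np
from deepsparse.sentence_transformers import SentenceTransformer

# Minimal similarity sketch (illustrative; not from the original card).
model = SentenceTransformer('zeroshot/gte-large-quant', export=False)

query = 'A fast animal leaps over a sleepy one.'
docs = ['The quick brown fox jumps over the lazy dog.',
        'Sentences are passed as a list of string.']

# Encode the query and the candidate sentences into dense vectors.
query_emb = model.encode([query])[0]
doc_embs = model.encode(docs)

# Cosine similarity: dot product of the two vectors divided by their norms.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidates by similarity to the query (highest first).
for score, doc in sorted(((cosine(query_emb, e), d) for e, d in zip(doc_embs, docs)),
                         reverse=True):
    print(f"{score:.3f}  {doc}")
```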
[ "BIOSSES" ]
Non_BioNLP
{"language": ["en"], "license": "mit", "tags": ["sparse sparsity quantized onnx embeddings int8", "mteb"], "model-index": [{"name": "gte-large-quant", "results": [{"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 90.27260027646717}, {"type": "cos_sim_spearman", "value": 87.97790825077952}, {"type": "euclidean_pearson", "value": 88.42832241523092}, {"type": "euclidean_spearman", "value": 87.97248644049293}, {"type": "manhattan_pearson", "value": 88.13802465778512}, {"type": "manhattan_spearman", "value": 87.43391995202266}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.1416039713116}, {"type": "cos_sim_spearman", "value": 79.13359419669726}, {"type": "euclidean_pearson", "value": 83.08042050989465}, {"type": "euclidean_spearman", "value": 79.31565112619433}, {"type": "manhattan_pearson", "value": 83.10376638254372}, {"type": "manhattan_spearman", "value": 79.30772376012946}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.93030439955828}, {"type": "cos_sim_spearman", "value": 75.98104622572393}, {"type": "euclidean_pearson", "value": 81.20791722502764}, {"type": "euclidean_spearman", "value": 75.74595761987686}, {"type": "manhattan_pearson", "value": 81.23169425598003}, {"type": "manhattan_spearman", "value": 75.73065403644094}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.6693892097855}, {"type": "cos_sim_spearman", "value": 87.54973524492165}, {"type": "euclidean_pearson", "value": 86.55642466103943}, {"type": "euclidean_spearman", "value": 87.47921340148683}, {"type": "manhattan_pearson", "value": 86.52043275063926}, {"type": "manhattan_spearman", "value": 87.43869426658489}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.37393784507647}, {"type": "cos_sim_spearman", "value": 81.98702164762233}, {"type": "euclidean_pearson", "value": 84.22038158338351}, {"type": "euclidean_spearman", "value": 81.9872746771322}, {"type": "manhattan_pearson", "value": 84.21915949674062}, {"type": "manhattan_spearman", "value": 81.97923386273747}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.34477744314285}, {"type": "cos_sim_spearman", "value": 88.92669309789463}, {"type": "euclidean_pearson", "value": 88.20128441166663}, {"type": "euclidean_spearman", "value": 88.91524205114627}, {"type": "manhattan_pearson", "value": 88.24425729639415}, {"type": "manhattan_spearman", "value": 88.97457451709523}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": 
"default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.11827015492467}, {"type": "cos_sim_spearman", "value": 83.59397157586835}, {"type": "euclidean_pearson", "value": 82.97284591328044}, {"type": "euclidean_spearman", "value": 83.74509747941255}, {"type": "manhattan_pearson", "value": 82.974440264842}, {"type": "manhattan_spearman", "value": 83.72260506292083}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.29744487677577}, {"type": "cos_sim_spearman", "value": 88.50799779856109}, {"type": "euclidean_pearson", "value": 89.0149154609955}, {"type": "euclidean_spearman", "value": 88.72798794474068}, {"type": "manhattan_pearson", "value": 89.14318227078863}, {"type": "manhattan_spearman", "value": 88.98372697017017}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 70.114540107077}, {"type": "cos_sim_spearman", "value": 69.72244488054433}, {"type": "euclidean_pearson", "value": 70.03658853094686}, {"type": "euclidean_spearman", "value": 68.96035610557085}, {"type": "manhattan_pearson", "value": 69.83707789686764}, {"type": "manhattan_spearman", "value": 68.71831797289812}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.86664469775837}, {"type": "cos_sim_spearman", "value": 85.39649452953681}, {"type": "euclidean_pearson", "value": 85.68509956626748}, {"type": "euclidean_spearman", "value": 85.50984027606854}, {"type": "manhattan_pearson", "value": 85.6688745008871}, {"type": "manhattan_spearman", "value": 85.465201888803}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.8079207920792}, {"type": "cos_sim_ap", "value": 95.62897445718106}, {"type": "cos_sim_f1", "value": 90.03083247687564}, {"type": "cos_sim_precision", "value": 92.60042283298098}, {"type": "cos_sim_recall", "value": 87.6}, {"type": "dot_accuracy", "value": 99.67029702970297}, {"type": "dot_ap", "value": 90.20258347721159}, {"type": "dot_f1", "value": 83.06172839506172}, {"type": "dot_precision", "value": 82.04878048780488}, {"type": "dot_recall", "value": 84.1}, {"type": "euclidean_accuracy", "value": 99.80594059405941}, {"type": "euclidean_ap", "value": 95.53963697283662}, {"type": "euclidean_f1", "value": 89.92405063291139}, {"type": "euclidean_precision", "value": 91.07692307692308}, {"type": "euclidean_recall", "value": 88.8}, {"type": "manhattan_accuracy", "value": 99.80594059405941}, {"type": "manhattan_ap", "value": 95.55714505339634}, {"type": "manhattan_f1", "value": 90.06085192697769}, {"type": "manhattan_precision", "value": 91.35802469135803}, {"type": "manhattan_recall", "value": 88.8}, {"type": "max_accuracy", "value": 99.8079207920792}, {"type": "max_ap", "value": 95.62897445718106}, 
{"type": "max_f1", "value": 90.06085192697769}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 85.87351731537224}, {"type": "cos_sim_ap", "value": 72.87360532701162}, {"type": "cos_sim_f1", "value": 67.8826895565093}, {"type": "cos_sim_precision", "value": 61.918225315354505}, {"type": "cos_sim_recall", "value": 75.11873350923483}, {"type": "dot_accuracy", "value": 80.15139774691542}, {"type": "dot_ap", "value": 53.5201503222712}, {"type": "dot_f1", "value": 53.42203179614388}, {"type": "dot_precision", "value": 46.64303996849773}, {"type": "dot_recall", "value": 62.50659630606861}, {"type": "euclidean_accuracy", "value": 85.87351731537224}, {"type": "euclidean_ap", "value": 73.10465263888227}, {"type": "euclidean_f1", "value": 68.38209376101516}, {"type": "euclidean_precision", "value": 61.63948316034739}, {"type": "euclidean_recall", "value": 76.78100263852242}, {"type": "manhattan_accuracy", "value": 85.83775406806939}, {"type": "manhattan_ap", "value": 73.08358693248583}, {"type": "manhattan_f1", "value": 68.34053485927829}, {"type": "manhattan_precision", "value": 61.303163628745025}, {"type": "manhattan_recall", "value": 77.20316622691293}, {"type": "max_accuracy", "value": 85.87351731537224}, {"type": "max_ap", "value": 73.10465263888227}, {"type": "max_f1", "value": 68.38209376101516}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.85202002561415}, {"type": "cos_sim_ap", "value": 85.58170945333845}, {"type": "cos_sim_f1", "value": 77.87783280804442}, {"type": "cos_sim_precision", "value": 75.95140515222482}, {"type": "cos_sim_recall", "value": 79.90452725592854}, {"type": "dot_accuracy", "value": 85.29902588582296}, {"type": "dot_ap", "value": 76.95795800483633}, {"type": "dot_f1", "value": 71.30231900452489}, {"type": "dot_precision", "value": 65.91503267973856}, {"type": "dot_recall", "value": 77.6485987064983}, {"type": "euclidean_accuracy", "value": 88.80738929638684}, {"type": "euclidean_ap", "value": 85.5344499509856}, {"type": "euclidean_f1", "value": 77.9805854353285}, {"type": "euclidean_precision", "value": 75.97312495435624}, {"type": "euclidean_recall", "value": 80.09701262704034}, {"type": "manhattan_accuracy", "value": 88.7782822990647}, {"type": "manhattan_ap", "value": 85.52577812395661}, {"type": "manhattan_f1", "value": 77.97958958110746}, {"type": "manhattan_precision", "value": 74.76510067114094}, {"type": "manhattan_recall", "value": 81.48290729904527}, {"type": "max_accuracy", "value": 88.85202002561415}, {"type": "max_ap", "value": 85.58170945333845}, {"type": "max_f1", "value": 77.9805854353285}]}]}]}
dataset
null
509
ostapeno/ft_no_transf_1B_similar10
ostapeno
null
[ "region:us" ]
2023-12-25T20:45:03Z
2023-12-26T17:43:53+00:00
0
0
---
{}
---

Number of experts present in the library: 40

| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| aeslc_1_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| social_i_qa_Generate_the_question_from_the_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora |
| math_dataset_algebra__linear_1d_1_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/math_dataset_algebra__linear_1d_1_0_0 | lora |
| ropes_background_new_situation_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora |
| glue_qqp_2_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/glue_qqp_2_0_0 | lora |
| trivia_qa_rc_1_1_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/trivia_qa_rc_1_1_0 | lora |
| wiqa_what_is_the_final_step_of_the_following_process | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| cos_e_v1_11_explain_why_human | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_explain_why_human | lora |
| race_high_Write_a_multi_choice_question_options_given_ | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/race_high_Write_a_multi_choice_question_options_given_ | lora |
| quarel_heres_a_story_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| glue_stsb_2_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/glue_stsb_2_0_0 | lora |
| niv2_explanation | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/niv2_explanation | lora |
| sciq_Multiple_Choice | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/sciq_Multiple_Choice | lora |
| kilt_tasks_hotpotqa_combining_facts | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/kilt_tasks_hotpotqa_combining_facts | lora |
| niv2_dialogue_act_recognition | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| super_glue_multirc_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/super_glue_multirc_1_0_2 | lora |
| quartz_use_info_from_paragraph_question | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_use_info_from_paragraph_question | lora |
| wiki_hop_original_generate_subject_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora |
| anli_r1_0_1_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/anli_r1_0_1_0 | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| ultrachat_25 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ultrachat_25 | lora |
| yelp_polarity_reviews_0_2_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| ag_news_subset_1_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ag_news_subset_1_0_0 | lora |
| ropes_prompt_beginning_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| super_glue_rte_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/super_glue_rte_1_0_2 | lora |
| web_questions_potential_correct_answer | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_potential_correct_answer | lora |
| ropes_plain_bottom_hint_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora |
| super_glue_cb_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| wiqa_what_might_be_the_last_step_of_the_process | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_last_step_of_the_process | lora |
| ropes_new_situation_background_answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora |
| ropes_read_background_situation_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_read_background_situation | lora |
| duorc_SelfRC_generate_question_by_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora |
| ropes_background_situation_middle_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_background_situation_middle | lora |
| app_reviews_generate_review | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/app_reviews_generate_review | lora |
| wiki_hop_original_choose_best_object_affirmative_2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_affirmative_2 | lora |
| quail_description_context_question_answer_id | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quail_description_context_question_answer_id | lora |
| wiki_bio_guess_person | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_bio_guess_person | lora |
| high_school_psychology | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/high_school_psychology | lora |
| wiki_hop_original_generate_object_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora |
| ropes_plain_bottom_hint_v2 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora |

Last updated on: 2023-12-26 04:09:55+00:00
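
For readers who want to try one of the experts listed above, the sketch below shows one generic way to apply a single LoRA adapter to the shared EleutherAI/gpt-neo-1.3B base model with Hugging Face PEFT. This is an illustration under stated assumptions rather than the library's own tooling: it presumes the chosen expert has been exported to a local directory in standard PEFT LoRA format, and the `./experts/sciq_Multiple_Choice` path and sample prompt are hypothetical.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Shared base model used by every expert in the table above.
base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

# Hypothetical local path: one expert (e.g. sciq_Multiple_Choice) exported in PEFT LoRA format.
model = PeftModel.from_pretrained(base, "./experts/sciq_Multiple_Choice")
model.eval()

prompt = "Question: Which gas do plants absorb during photosynthesis?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In principle, pointing at a different exported adapter selects another expert while reusing the same 1.3B base weights.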
[ "SCIQ" ]
Non_BioNLP
{}
dataset
null
510
compressa-ai/Compressa-Embeddings
compressa-ai
feature-extraction
[ "sentence-transformers", "safetensors", "mistral", "feature-extraction", "mteb", "transformers", "en", "arxiv:2210.07316", "arxiv:2310.06825", "arxiv:2401.00368", "arxiv:2104.08663", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-10-28T20:17:52Z
2024-10-28T20:49:10+00:00
7
1
--- language: - en license: cc-by-nc-4.0 tags: - mteb - sentence-transformers - transformers model-index: - name: SFR-Embedding-Mistral results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.92537313432834 - type: ap value: 40.86767661556651 - type: f1 value: 71.65758897929837 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 95.967 - type: ap value: 94.46300829592593 - type: f1 value: 95.96507173189292 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 54.352000000000004 - type: f1 value: 53.636682615380174 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: ndcg_at_1 value: 43.314 - type: ndcg_at_2 value: 54.757 - type: ndcg_at_3 value: 58.84700000000001 - type: ndcg_at_5 value: 63.634 - type: ndcg_at_7 value: 65.741 - type: ndcg_at_10 value: 67.171 - type: ndcg_at_20 value: 68.585 - type: ndcg_at_30 value: 68.81 - type: ndcg_at_50 value: 68.932 - type: ndcg_at_70 value: 68.992 - type: ndcg_at_100 value: 69.014 - type: ndcg_at_200 value: 69.014 - type: ndcg_at_300 value: 69.014 - type: ndcg_at_500 value: 69.014 - type: ndcg_at_700 value: 69.014 - type: ndcg_at_1000 value: 69.014 - type: map_at_1 value: 43.314 - type: map_at_2 value: 52.383 - type: map_at_3 value: 55.108999999999995 - type: map_at_5 value: 57.772999999999996 - type: map_at_7 value: 58.718 - type: map_at_10 value: 59.256 - type: map_at_20 value: 59.668 - type: map_at_30 value: 59.709999999999994 - type: map_at_50 value: 59.727 - type: map_at_70 value: 59.733999999999995 - type: map_at_100 value: 59.73500000000001 - type: map_at_200 value: 59.73500000000001 - type: map_at_300 value: 59.73500000000001 - type: map_at_500 value: 59.73500000000001 - type: map_at_700 value: 59.73500000000001 - type: map_at_1000 value: 59.73500000000001 - type: recall_at_1 value: 43.314 - type: recall_at_2 value: 61.451 - type: recall_at_3 value: 69.63000000000001 - type: recall_at_5 value: 81.223 - type: recall_at_7 value: 87.33999999999999 - type: recall_at_10 value: 92.034 - type: recall_at_20 value: 97.44 - type: recall_at_30 value: 98.506 - type: recall_at_50 value: 99.14699999999999 - type: recall_at_70 value: 99.502 - type: recall_at_100 value: 99.644 - type: recall_at_200 value: 99.644 - type: recall_at_300 value: 99.644 - type: recall_at_500 value: 99.644 - type: recall_at_700 value: 99.644 - type: recall_at_1000 value: 99.644 - type: precision_at_1 value: 43.314 - type: precision_at_2 value: 30.725 - type: precision_at_3 value: 23.21 - type: precision_at_5 value: 16.245 - type: precision_at_7 value: 12.477 - type: precision_at_10 value: 9.203 - type: precision_at_20 value: 4.872 - type: precision_at_30 value: 3.2840000000000003 - type: precision_at_50 value: 1.983 - type: precision_at_70 value: 1.421 - type: precision_at_100 value: 0.996 - type: precision_at_200 value: 0.498 - type: precision_at_300 value: 0.332 - type: precision_at_500 value: 0.199 - type: precision_at_700 value: 0.14200000000000002 - type: precision_at_1000 value: 0.1 - type: 
mrr_at_1 value: 44.666 - type: mrr_at_2 value: 52.418 - type: mrr_at_3 value: 55.595000000000006 - type: mrr_at_5 value: 58.205 - type: mrr_at_7 value: 59.202999999999996 - type: mrr_at_10 value: 59.727 - type: mrr_at_20 value: 60.133 - type: mrr_at_30 value: 60.178 - type: mrr_at_50 value: 60.192 - type: mrr_at_70 value: 60.19799999999999 - type: mrr_at_100 value: 60.199999999999996 - type: mrr_at_200 value: 60.199999999999996 - type: mrr_at_300 value: 60.199999999999996 - type: mrr_at_500 value: 60.199999999999996 - type: mrr_at_700 value: 60.199999999999996 - type: mrr_at_1000 value: 60.199999999999996 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 52.07508593014336 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 47.381339333240675 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.58376647859171 - type: mrr value: 80.56885635140483 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.40107280274783 - type: cos_sim_spearman value: 86.07003345325681 - type: euclidean_pearson value: 87.1726034325395 - type: euclidean_spearman value: 86.07003345325681 - type: manhattan_pearson value: 87.25660625029772 - type: manhattan_spearman value: 86.3808839096893 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 88.81168831168831 - type: f1 value: 88.76514496560141 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 43.9382520874344 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 41.14351847240913 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: ndcg_at_1 value: 34.51166666666667 - type: ndcg_at_2 value: 38.51591666666667 - type: ndcg_at_3 value: 40.95083333333333 - type: ndcg_at_5 value: 43.580666666666666 - type: ndcg_at_7 value: 45.0625 - type: ndcg_at_10 value: 46.49083333333333 - type: ndcg_at_20 value: 48.731333333333325 - type: ndcg_at_30 value: 49.78666666666667 - type: ndcg_at_50 value: 50.84049999999999 - type: ndcg_at_70 value: 51.393750000000004 - type: ndcg_at_100 value: 51.883333333333326 - type: ndcg_at_200 value: 52.65225 - type: ndcg_at_300 value: 52.98241666666669 - type: ndcg_at_500 value: 53.28541666666668 - type: ndcg_at_700 value: 53.49241666666668 - type: ndcg_at_1000 value: 53.63758333333334 - type: map_at_1 value: 29.10075 - type: map_at_2 value: 34.636500000000005 - type: map_at_3 value: 36.92033333333333 - type: map_at_5 value: 38.81641666666666 - type: map_at_7 value: 39.635416666666664 
- type: map_at_10 value: 40.294583333333335 - type: map_at_20 value: 41.07574999999999 - type: map_at_30 value: 41.333 - type: map_at_50 value: 41.529333333333334 - type: map_at_70 value: 41.606833333333334 - type: map_at_100 value: 41.66224999999999 - type: map_at_200 value: 41.72691666666666 - type: map_at_300 value: 41.746583333333334 - type: map_at_500 value: 41.75983333333333 - type: map_at_700 value: 41.76558333333333 - type: map_at_1000 value: 41.769000000000005 - type: recall_at_1 value: 29.10075 - type: recall_at_2 value: 39.07658333333333 - type: recall_at_3 value: 44.93591666666667 - type: recall_at_5 value: 51.66883333333333 - type: recall_at_7 value: 55.881000000000014 - type: recall_at_10 value: 60.34691666666667 - type: recall_at_20 value: 68.44016666666667 - type: recall_at_30 value: 72.90766666666667 - type: recall_at_50 value: 77.843 - type: recall_at_70 value: 80.70366666666668 - type: recall_at_100 value: 83.42866666666667 - type: recall_at_200 value: 88.06816666666668 - type: recall_at_300 value: 90.249 - type: recall_at_500 value: 92.37616666666668 - type: recall_at_700 value: 93.978 - type: recall_at_1000 value: 95.12791666666666 - type: precision_at_1 value: 34.51166666666667 - type: precision_at_2 value: 24.326333333333327 - type: precision_at_3 value: 19.099249999999998 - type: precision_at_5 value: 13.672666666666666 - type: precision_at_7 value: 10.772 - type: precision_at_10 value: 8.302166666666668 - type: precision_at_20 value: 4.8960833333333325 - type: precision_at_30 value: 3.551083333333333 - type: precision_at_50 value: 2.3386666666666662 - type: precision_at_70 value: 1.7605833333333334 - type: precision_at_100 value: 1.2965 - type: precision_at_200 value: 0.7106666666666668 - type: precision_at_300 value: 0.4955 - type: precision_at_500 value: 0.3106666666666667 - type: precision_at_700 value: 0.22791666666666668 - type: precision_at_1000 value: 0.1635833333333333 - type: mrr_at_1 value: 34.51166666666667 - type: mrr_at_2 value: 39.954249999999995 - type: mrr_at_3 value: 41.93741666666668 - type: mrr_at_5 value: 43.487166666666674 - type: mrr_at_7 value: 44.14983333333333 - type: mrr_at_10 value: 44.62766666666666 - type: mrr_at_20 value: 45.15291666666668 - type: mrr_at_30 value: 45.317 - type: mrr_at_50 value: 45.42875 - type: mrr_at_70 value: 45.46966666666667 - type: mrr_at_100 value: 45.49716666666667 - type: mrr_at_200 value: 45.525166666666664 - type: mrr_at_300 value: 45.53233333333335 - type: mrr_at_500 value: 45.5365 - type: mrr_at_700 value: 45.538583333333335 - type: mrr_at_1000 value: 45.539583333333326 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: ndcg_at_1 value: 35.179 - type: ndcg_at_2 value: 31.243 - type: ndcg_at_3 value: 30.562 - type: ndcg_at_5 value: 32.409 - type: ndcg_at_7 value: 34.525 - type: ndcg_at_10 value: 36.415 - type: ndcg_at_20 value: 39.443 - type: ndcg_at_30 value: 40.796 - type: ndcg_at_50 value: 42.16 - type: ndcg_at_70 value: 42.971 - type: ndcg_at_100 value: 43.691 - type: ndcg_at_200 value: 45.004 - type: ndcg_at_300 value: 45.527 - type: ndcg_at_500 value: 46.072 - type: ndcg_at_700 value: 46.387 - type: ndcg_at_1000 value: 46.663 - type: map_at_1 value: 15.692 - type: map_at_2 value: 20.116 - type: map_at_3 value: 22.6 - type: map_at_5 value: 24.701 - type: map_at_7 value: 25.934 - type: map_at_10 value: 26.843 - type: map_at_20 value: 27.975 - type: map_at_30 value: 28.372000000000003 - type: map_at_50 value: 
28.671000000000003 - type: map_at_70 value: 28.803 - type: map_at_100 value: 28.895 - type: map_at_200 value: 29.011 - type: map_at_300 value: 29.042 - type: map_at_500 value: 29.065 - type: map_at_700 value: 29.075 - type: map_at_1000 value: 29.081000000000003 - type: recall_at_1 value: 15.692 - type: recall_at_2 value: 22.602 - type: recall_at_3 value: 27.814 - type: recall_at_5 value: 33.756 - type: recall_at_7 value: 38.073 - type: recall_at_10 value: 42.553000000000004 - type: recall_at_20 value: 51.121 - type: recall_at_30 value: 55.523999999999994 - type: recall_at_50 value: 60.586 - type: recall_at_70 value: 63.94 - type: recall_at_100 value: 67.134 - type: recall_at_200 value: 73.543 - type: recall_at_300 value: 76.372 - type: recall_at_500 value: 79.60199999999999 - type: recall_at_700 value: 81.536 - type: recall_at_1000 value: 83.37400000000001 - type: precision_at_1 value: 35.179 - type: precision_at_2 value: 27.199 - type: precision_at_3 value: 22.953000000000003 - type: precision_at_5 value: 17.224999999999998 - type: precision_at_7 value: 14.238999999999999 - type: precision_at_10 value: 11.303 - type: precision_at_20 value: 6.954000000000001 - type: precision_at_30 value: 5.116 - type: precision_at_50 value: 3.395 - type: precision_at_70 value: 2.579 - type: precision_at_100 value: 1.9109999999999998 - type: precision_at_200 value: 1.065 - type: precision_at_300 value: 0.743 - type: precision_at_500 value: 0.46699999999999997 - type: precision_at_700 value: 0.344 - type: precision_at_1000 value: 0.247 - type: mrr_at_1 value: 35.179 - type: mrr_at_2 value: 41.792 - type: mrr_at_3 value: 44.484 - type: mrr_at_5 value: 46.39 - type: mrr_at_7 value: 47.125 - type: mrr_at_10 value: 47.711999999999996 - type: mrr_at_20 value: 48.214 - type: mrr_at_30 value: 48.325 - type: mrr_at_50 value: 48.392 - type: mrr_at_70 value: 48.418 - type: mrr_at_100 value: 48.44 - type: mrr_at_200 value: 48.46 - type: mrr_at_300 value: 48.461999999999996 - type: mrr_at_500 value: 48.466 - type: mrr_at_700 value: 48.466 - type: mrr_at_1000 value: 48.467 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: ndcg_at_1 value: 62.375 - type: ndcg_at_2 value: 56.286 - type: ndcg_at_3 value: 53.665 - type: ndcg_at_5 value: 51.139 - type: ndcg_at_7 value: 49.873 - type: ndcg_at_10 value: 49.056 - type: ndcg_at_20 value: 48.783 - type: ndcg_at_30 value: 49.166 - type: ndcg_at_50 value: 51.141999999999996 - type: ndcg_at_70 value: 52.774 - type: ndcg_at_100 value: 54.403 - type: ndcg_at_200 value: 57.419 - type: ndcg_at_300 value: 58.778 - type: ndcg_at_500 value: 60.228 - type: ndcg_at_700 value: 61.07599999999999 - type: ndcg_at_1000 value: 61.846000000000004 - type: map_at_1 value: 10.359 - type: map_at_2 value: 14.446 - type: map_at_3 value: 16.689 - type: map_at_5 value: 20.096 - type: map_at_7 value: 22.247 - type: map_at_10 value: 24.468999999999998 - type: map_at_20 value: 28.938000000000002 - type: map_at_30 value: 31.134 - type: map_at_50 value: 33.403 - type: map_at_70 value: 34.486 - type: map_at_100 value: 35.337 - type: map_at_200 value: 36.364999999999995 - type: map_at_300 value: 36.735 - type: map_at_500 value: 37.057 - type: map_at_700 value: 37.225 - type: map_at_1000 value: 37.379 - type: recall_at_1 value: 10.359 - type: recall_at_2 value: 14.945 - type: recall_at_3 value: 17.694 - type: recall_at_5 value: 22.677 - type: recall_at_7 value: 26.131 - type: recall_at_10 value: 30.053 - type: recall_at_20 value: 
39.518 - type: recall_at_30 value: 44.925 - type: recall_at_50 value: 52.154 - type: recall_at_70 value: 56.729 - type: recall_at_100 value: 61.18900000000001 - type: recall_at_200 value: 70.407 - type: recall_at_300 value: 74.412 - type: recall_at_500 value: 78.891 - type: recall_at_700 value: 81.74 - type: recall_at_1000 value: 84.253 - type: precision_at_1 value: 75 - type: precision_at_2 value: 64.125 - type: precision_at_3 value: 57.833 - type: precision_at_5 value: 50.24999999999999 - type: precision_at_7 value: 44.75 - type: precision_at_10 value: 39.75 - type: precision_at_20 value: 30.412 - type: precision_at_30 value: 25.141999999999996 - type: precision_at_50 value: 19.2 - type: precision_at_70 value: 15.729000000000001 - type: precision_at_100 value: 12.552 - type: precision_at_200 value: 7.866 - type: precision_at_300 value: 5.9270000000000005 - type: precision_at_500 value: 4.1129999999999995 - type: precision_at_700 value: 3.2460000000000004 - type: precision_at_1000 value: 2.5260000000000002 - type: mrr_at_1 value: 75 - type: mrr_at_2 value: 78.625 - type: mrr_at_3 value: 79.708 - type: mrr_at_5 value: 80.446 - type: mrr_at_7 value: 80.862 - type: mrr_at_10 value: 81.161 - type: mrr_at_20 value: 81.3 - type: mrr_at_30 value: 81.348 - type: mrr_at_50 value: 81.361 - type: mrr_at_70 value: 81.361 - type: mrr_at_100 value: 81.361 - type: mrr_at_200 value: 81.367 - type: mrr_at_300 value: 81.367 - type: mrr_at_500 value: 81.368 - type: mrr_at_700 value: 81.368 - type: mrr_at_1000 value: 81.368 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 50.239999999999995 - type: f1 value: 46.42361822342044 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: ndcg_at_1 value: 83.723 - type: ndcg_at_2 value: 86.777 - type: ndcg_at_3 value: 87.997 - type: ndcg_at_5 value: 88.864 - type: ndcg_at_7 value: 89.143 - type: ndcg_at_10 value: 89.349 - type: ndcg_at_20 value: 89.709 - type: ndcg_at_30 value: 89.82900000000001 - type: ndcg_at_50 value: 89.923 - type: ndcg_at_70 value: 89.982 - type: ndcg_at_100 value: 90.026 - type: ndcg_at_200 value: 90.10000000000001 - type: ndcg_at_300 value: 90.12599999999999 - type: ndcg_at_500 value: 90.17399999999999 - type: ndcg_at_700 value: 90.19 - type: ndcg_at_1000 value: 90.208 - type: map_at_1 value: 77.64999999999999 - type: map_at_2 value: 83.769 - type: map_at_3 value: 85.041 - type: map_at_5 value: 85.736 - type: map_at_7 value: 85.924 - type: map_at_10 value: 86.032 - type: map_at_20 value: 86.177 - type: map_at_30 value: 86.213 - type: map_at_50 value: 86.233 - type: map_at_70 value: 86.24300000000001 - type: map_at_100 value: 86.249 - type: map_at_200 value: 86.256 - type: map_at_300 value: 86.258 - type: map_at_500 value: 86.26 - type: map_at_700 value: 86.26 - type: map_at_1000 value: 86.261 - type: recall_at_1 value: 77.64999999999999 - type: recall_at_2 value: 88.53999999999999 - type: recall_at_3 value: 91.696 - type: recall_at_5 value: 93.916 - type: recall_at_7 value: 94.731 - type: recall_at_10 value: 95.318 - type: recall_at_20 value: 96.507 - type: recall_at_30 value: 96.956 - type: recall_at_50 value: 97.34899999999999 - type: recall_at_70 value: 97.61 - type: recall_at_100 value: 97.83 - type: recall_at_200 value: 98.223 - type: recall_at_300 value: 98.374 - type: recall_at_500 value: 98.67899999999999 - type: 
recall_at_700 value: 98.787 - type: recall_at_1000 value: 98.919 - type: precision_at_1 value: 83.723 - type: precision_at_2 value: 48.425000000000004 - type: precision_at_3 value: 33.638 - type: precision_at_5 value: 20.843 - type: precision_at_7 value: 15.079 - type: precision_at_10 value: 10.674999999999999 - type: precision_at_20 value: 5.457999999999999 - type: precision_at_30 value: 3.6740000000000004 - type: precision_at_50 value: 2.2239999999999998 - type: precision_at_70 value: 1.599 - type: precision_at_100 value: 1.125 - type: precision_at_200 value: 0.5680000000000001 - type: precision_at_300 value: 0.38 - type: precision_at_500 value: 0.22999999999999998 - type: precision_at_700 value: 0.165 - type: precision_at_1000 value: 0.116 - type: mrr_at_1 value: 83.723 - type: mrr_at_2 value: 88.794 - type: mrr_at_3 value: 89.679 - type: mrr_at_5 value: 90.049 - type: mrr_at_7 value: 90.129 - type: mrr_at_10 value: 90.167 - type: mrr_at_20 value: 90.208 - type: mrr_at_30 value: 90.214 - type: mrr_at_50 value: 90.217 - type: mrr_at_70 value: 90.218 - type: mrr_at_100 value: 90.21900000000001 - type: mrr_at_200 value: 90.21900000000001 - type: mrr_at_300 value: 90.21900000000001 - type: mrr_at_500 value: 90.21900000000001 - type: mrr_at_700 value: 90.21900000000001 - type: mrr_at_1000 value: 90.21900000000001 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: ndcg_at_1 value: 59.721999999999994 - type: ndcg_at_2 value: 56.85 - type: ndcg_at_3 value: 56.462999999999994 - type: ndcg_at_5 value: 57.75599999999999 - type: ndcg_at_7 value: 59.109 - type: ndcg_at_10 value: 60.402 - type: ndcg_at_20 value: 63.071999999999996 - type: ndcg_at_30 value: 64.302 - type: ndcg_at_50 value: 65.619 - type: ndcg_at_70 value: 66.161 - type: ndcg_at_100 value: 66.645 - type: ndcg_at_200 value: 67.353 - type: ndcg_at_300 value: 67.646 - type: ndcg_at_500 value: 67.852 - type: ndcg_at_700 value: 67.974 - type: ndcg_at_1000 value: 68.084 - type: map_at_1 value: 31.56 - type: map_at_2 value: 42.093 - type: map_at_3 value: 46.177 - type: map_at_5 value: 49.78 - type: map_at_7 value: 51.410999999999994 - type: map_at_10 value: 52.524 - type: map_at_20 value: 53.815000000000005 - type: map_at_30 value: 54.201 - type: map_at_50 value: 54.531 - type: map_at_70 value: 54.625 - type: map_at_100 value: 54.686 - type: map_at_200 value: 54.757999999999996 - type: map_at_300 value: 54.776 - type: map_at_500 value: 54.786 - type: map_at_700 value: 54.790000000000006 - type: map_at_1000 value: 54.793000000000006 - type: recall_at_1 value: 31.56 - type: recall_at_2 value: 44.858 - type: recall_at_3 value: 51.11 - type: recall_at_5 value: 58.394 - type: recall_at_7 value: 63.001 - type: recall_at_10 value: 66.81200000000001 - type: recall_at_20 value: 74.901 - type: recall_at_30 value: 79.218 - type: recall_at_50 value: 84.49 - type: recall_at_70 value: 87.003 - type: recall_at_100 value: 89.345 - type: recall_at_200 value: 93.173 - type: recall_at_300 value: 94.906 - type: recall_at_500 value: 96.223 - type: recall_at_700 value: 97.043 - type: recall_at_1000 value: 97.785 - type: precision_at_1 value: 59.721999999999994 - type: precision_at_2 value: 46.682 - type: precision_at_3 value: 37.602999999999994 - type: precision_at_5 value: 27.500000000000004 - type: precision_at_7 value: 21.847 - type: precision_at_10 value: 16.667 - type: precision_at_20 value: 9.545 - type: precision_at_30 value: 6.795 - type: precision_at_50 value: 4.38 - type: 
precision_at_70 value: 3.221 - type: precision_at_100 value: 2.319 - type: precision_at_200 value: 1.2149999999999999 - type: precision_at_300 value: 0.827 - type: precision_at_500 value: 0.504 - type: precision_at_700 value: 0.364 - type: precision_at_1000 value: 0.257 - type: mrr_at_1 value: 59.721999999999994 - type: mrr_at_2 value: 64.506 - type: mrr_at_3 value: 65.792 - type: mrr_at_5 value: 66.965 - type: mrr_at_7 value: 67.34700000000001 - type: mrr_at_10 value: 67.57 - type: mrr_at_20 value: 67.896 - type: mrr_at_30 value: 68.008 - type: mrr_at_50 value: 68.083 - type: mrr_at_70 value: 68.105 - type: mrr_at_100 value: 68.116 - type: mrr_at_200 value: 68.12700000000001 - type: mrr_at_300 value: 68.13 - type: mrr_at_500 value: 68.132 - type: mrr_at_700 value: 68.133 - type: mrr_at_1000 value: 68.133 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: ndcg_at_1 value: 81.796 - type: ndcg_at_2 value: 67.999 - type: ndcg_at_3 value: 72.15599999999999 - type: ndcg_at_5 value: 74.99900000000001 - type: ndcg_at_7 value: 76.179 - type: ndcg_at_10 value: 77.022 - type: ndcg_at_20 value: 78.173 - type: ndcg_at_30 value: 78.648 - type: ndcg_at_50 value: 79.104 - type: ndcg_at_70 value: 79.335 - type: ndcg_at_100 value: 79.56 - type: ndcg_at_200 value: 79.911 - type: ndcg_at_300 value: 80.045 - type: ndcg_at_500 value: 80.19500000000001 - type: ndcg_at_700 value: 80.281 - type: ndcg_at_1000 value: 80.35 - type: map_at_1 value: 40.898 - type: map_at_2 value: 62.016000000000005 - type: map_at_3 value: 66.121 - type: map_at_5 value: 68.471 - type: map_at_7 value: 69.261 - type: map_at_10 value: 69.738 - type: map_at_20 value: 70.208 - type: map_at_30 value: 70.343 - type: map_at_50 value: 70.43700000000001 - type: map_at_70 value: 70.47099999999999 - type: map_at_100 value: 70.498 - type: map_at_200 value: 70.526 - type: map_at_300 value: 70.533 - type: map_at_500 value: 70.538 - type: map_at_700 value: 70.541 - type: map_at_1000 value: 70.542 - type: recall_at_1 value: 40.898 - type: recall_at_2 value: 63.964 - type: recall_at_3 value: 70.743 - type: recall_at_5 value: 76.36699999999999 - type: recall_at_7 value: 79.142 - type: recall_at_10 value: 81.404 - type: recall_at_20 value: 85.111 - type: recall_at_30 value: 86.92800000000001 - type: recall_at_50 value: 88.899 - type: recall_at_70 value: 90.01400000000001 - type: recall_at_100 value: 91.19500000000001 - type: recall_at_200 value: 93.234 - type: recall_at_300 value: 94.105 - type: recall_at_500 value: 95.159 - type: recall_at_700 value: 95.8 - type: recall_at_1000 value: 96.34700000000001 - type: precision_at_1 value: 81.796 - type: precision_at_2 value: 63.964 - type: precision_at_3 value: 47.162 - type: precision_at_5 value: 30.547 - type: precision_at_7 value: 22.612 - type: precision_at_10 value: 16.281000000000002 - type: precision_at_20 value: 8.511000000000001 - type: precision_at_30 value: 5.795 - type: precision_at_50 value: 3.556 - type: precision_at_70 value: 2.572 - type: precision_at_100 value: 1.8239999999999998 - type: precision_at_200 value: 0.932 - type: precision_at_300 value: 0.627 - type: precision_at_500 value: 0.381 - type: precision_at_700 value: 0.27399999999999997 - type: precision_at_1000 value: 0.193 - type: mrr_at_1 value: 81.796 - type: mrr_at_2 value: 85.69200000000001 - type: mrr_at_3 value: 86.52 - type: mrr_at_5 value: 86.973 - type: mrr_at_7 value: 87.13300000000001 - type: mrr_at_10 value: 87.208 - type: mrr_at_20 value: 87.303 - 
type: mrr_at_30 value: 87.32799999999999 - type: mrr_at_50 value: 87.347 - type: mrr_at_70 value: 87.35199999999999 - type: mrr_at_100 value: 87.355 - type: mrr_at_200 value: 87.357 - type: mrr_at_300 value: 87.357 - type: mrr_at_500 value: 87.358 - type: mrr_at_700 value: 87.358 - type: mrr_at_1000 value: 87.358 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 94.79200000000002 - type: ap value: 92.54484356773553 - type: f1 value: 94.78965313682525 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: ndcg_at_1 value: 24.398 - type: ndcg_at_2 value: 31.336000000000002 - type: ndcg_at_3 value: 35.266999999999996 - type: ndcg_at_5 value: 39.356 - type: ndcg_at_7 value: 41.562 - type: ndcg_at_10 value: 43.408 - type: ndcg_at_20 value: 46.107 - type: ndcg_at_30 value: 47.164 - type: ndcg_at_50 value: 48.126000000000005 - type: ndcg_at_70 value: 48.626999999999995 - type: ndcg_at_100 value: 49.043 - type: ndcg_at_200 value: 49.575 - type: ndcg_at_300 value: 49.794 - type: ndcg_at_500 value: 49.942 - type: ndcg_at_700 value: 50.014 - type: ndcg_at_1000 value: 50.077000000000005 - type: map_at_1 value: 23.723 - type: map_at_2 value: 29.593000000000004 - type: map_at_3 value: 32.273 - type: map_at_5 value: 34.587 - type: map_at_7 value: 35.589999999999996 - type: map_at_10 value: 36.296 - type: map_at_20 value: 37.059999999999995 - type: map_at_30 value: 37.265 - type: map_at_50 value: 37.402 - type: map_at_70 value: 37.454 - type: map_at_100 value: 37.486999999999995 - type: map_at_200 value: 37.516 - type: map_at_300 value: 37.524 - type: map_at_500 value: 37.528 - type: map_at_700 value: 37.529 - type: map_at_1000 value: 37.53 - type: recall_at_1 value: 23.723 - type: recall_at_2 value: 35.355 - type: recall_at_3 value: 43.22 - type: recall_at_5 value: 53.025 - type: recall_at_7 value: 59.327 - type: recall_at_10 value: 65.302 - type: recall_at_20 value: 75.765 - type: recall_at_30 value: 80.632 - type: recall_at_50 value: 85.63499999999999 - type: recall_at_70 value: 88.554 - type: recall_at_100 value: 91.16300000000001 - type: recall_at_200 value: 94.85 - type: recall_at_300 value: 96.532 - type: recall_at_500 value: 97.751 - type: recall_at_700 value: 98.383 - type: recall_at_1000 value: 98.97 - type: precision_at_1 value: 24.398 - type: precision_at_2 value: 18.274 - type: precision_at_3 value: 14.951999999999998 - type: precision_at_5 value: 11.052 - type: precision_at_7 value: 8.84 - type: precision_at_10 value: 6.8309999999999995 - type: precision_at_20 value: 3.978 - type: precision_at_30 value: 2.827 - type: precision_at_50 value: 1.807 - type: precision_at_70 value: 1.336 - type: precision_at_100 value: 0.964 - type: precision_at_200 value: 0.502 - type: precision_at_300 value: 0.34099999999999997 - type: precision_at_500 value: 0.208 - type: precision_at_700 value: 0.15 - type: precision_at_1000 value: 0.105 - type: mrr_at_1 value: 24.398 - type: mrr_at_2 value: 30.351 - type: mrr_at_3 value: 33.001000000000005 - type: mrr_at_5 value: 35.228 - type: mrr_at_7 value: 36.223 - type: mrr_at_10 value: 36.903999999999996 - type: mrr_at_20 value: 37.631 - type: mrr_at_30 value: 37.830000000000005 - type: mrr_at_50 value: 37.955 - type: mrr_at_70 value: 38.003 - type: mrr_at_100 value: 38.033 - type: mrr_at_200 value: 38.059 - type: mrr_at_300 value: 38.066 - type: mrr_at_500 
value: 38.068999999999996 - type: mrr_at_700 value: 38.07 - type: mrr_at_1000 value: 38.07 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.35658914728683 - type: f1 value: 96.15039630903114 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 86.29730962152303 - type: f1 value: 71.12166316567485 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 79.98991257565568 - type: f1 value: 77.41680115095276 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 82.1990585070612 - type: f1 value: 82.23719179179362 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 40.03019554933584 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 38.999760551497815 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.72383151953079 - type: mrr value: 33.93989699030721 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: ndcg_at_1 value: 51.858000000000004 - type: ndcg_at_2 value: 49.675999999999995 - type: ndcg_at_3 value: 47.519 - type: ndcg_at_5 value: 45.198 - type: ndcg_at_7 value: 43.504 - type: ndcg_at_10 value: 41.88 - type: ndcg_at_20 value: 39.122 - type: ndcg_at_30 value: 37.95 - type: ndcg_at_50 value: 37.602999999999994 - type: ndcg_at_70 value: 37.836 - type: ndcg_at_100 value: 38.493 - type: ndcg_at_200 value: 40.187 - type: ndcg_at_300 value: 41.524 - type: ndcg_at_500 value: 43.657000000000004 - type: ndcg_at_700 value: 45.234 - type: ndcg_at_1000 value: 47.047 - type: map_at_1 value: 6.392 - type: map_at_2 value: 10.113 - type: map_at_3 value: 11.543000000000001 - type: map_at_5 value: 13.729 - type: map_at_7 value: 14.985000000000001 - type: map_at_10 value: 16.217000000000002 - type: map_at_20 value: 18.106 - type: map_at_30 value: 18.878 - type: map_at_50 value: 19.822 - type: map_at_70 value: 20.352999999999998 - type: map_at_100 value: 20.827 - type: map_at_200 value: 21.512 - type: map_at_300 value: 21.826 - type: map_at_500 value: 22.155 - type: map_at_700 value: 22.349 - type: map_at_1000 value: 22.531000000000002 - type: recall_at_1 value: 6.392 - type: recall_at_2 value: 11.215 - type: recall_at_3 value: 13.231000000000002 - type: recall_at_5 value: 16.66 - type: recall_at_7 value: 18.802 - type: recall_at_10 value: 21.185000000000002 - type: recall_at_20 value: 25.35 - type: recall_at_30 value: 27.91 - type: recall_at_50 value: 32.845 - type: recall_at_70 value: 35.789 - type: recall_at_100 value: 39.247 - 
type: recall_at_200 value: 46.655 - type: recall_at_300 value: 51.43299999999999 - type: recall_at_500 value: 59.472 - type: recall_at_700 value: 64.742 - type: recall_at_1000 value: 70.97099999999999 - type: precision_at_1 value: 53.559999999999995 - type: precision_at_2 value: 48.762 - type: precision_at_3 value: 44.169000000000004 - type: precision_at_5 value: 39.071 - type: precision_at_7 value: 35.161 - type: precision_at_10 value: 31.238 - type: precision_at_20 value: 23.064999999999998 - type: precision_at_30 value: 18.844 - type: precision_at_50 value: 14.601 - type: precision_at_70 value: 12.088000000000001 - type: precision_at_100 value: 9.844999999999999 - type: precision_at_200 value: 6.358 - type: precision_at_300 value: 4.915 - type: precision_at_500 value: 3.531 - type: precision_at_700 value: 2.8649999999999998 - type: precision_at_1000 value: 2.289 - type: mrr_at_1 value: 54.17999999999999 - type: mrr_at_2 value: 59.288 - type: mrr_at_3 value: 60.836 - type: mrr_at_5 value: 62.275999999999996 - type: mrr_at_7 value: 62.688 - type: mrr_at_10 value: 62.865 - type: mrr_at_20 value: 63.11 - type: mrr_at_30 value: 63.193999999999996 - type: mrr_at_50 value: 63.258 - type: mrr_at_70 value: 63.278 - type: mrr_at_100 value: 63.297000000000004 - type: mrr_at_200 value: 63.315999999999995 - type: mrr_at_300 value: 63.318 - type: mrr_at_500 value: 63.32299999999999 - type: mrr_at_700 value: 63.324000000000005 - type: mrr_at_1000 value: 63.324999999999996 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: ndcg_at_1 value: 50.897999999999996 - type: ndcg_at_2 value: 59.126 - type: ndcg_at_3 value: 63.093999999999994 - type: ndcg_at_5 value: 67.197 - type: ndcg_at_7 value: 68.719 - type: ndcg_at_10 value: 69.915 - type: ndcg_at_20 value: 71.229 - type: ndcg_at_30 value: 71.667 - type: ndcg_at_50 value: 71.98 - type: ndcg_at_70 value: 72.127 - type: ndcg_at_100 value: 72.217 - type: ndcg_at_200 value: 72.319 - type: ndcg_at_300 value: 72.347 - type: ndcg_at_500 value: 72.37 - type: ndcg_at_700 value: 72.379 - type: ndcg_at_1000 value: 72.381 - type: map_at_1 value: 45.297 - type: map_at_2 value: 55.596000000000004 - type: map_at_3 value: 58.724 - type: map_at_5 value: 61.387 - type: map_at_7 value: 62.173 - type: map_at_10 value: 62.69 - type: map_at_20 value: 63.125 - type: map_at_30 value: 63.223 - type: map_at_50 value: 63.27700000000001 - type: map_at_70 value: 63.295 - type: map_at_100 value: 63.303 - type: map_at_200 value: 63.31 - type: map_at_300 value: 63.31099999999999 - type: map_at_500 value: 63.312000000000005 - type: map_at_700 value: 63.312000000000005 - type: map_at_1000 value: 63.312000000000005 - type: recall_at_1 value: 45.297 - type: recall_at_2 value: 63.866 - type: recall_at_3 value: 71.898 - type: recall_at_5 value: 81.16600000000001 - type: recall_at_7 value: 85.301 - type: recall_at_10 value: 88.94800000000001 - type: recall_at_20 value: 93.719 - type: recall_at_30 value: 95.628 - type: recall_at_50 value: 97.14699999999999 - type: recall_at_70 value: 97.955 - type: recall_at_100 value: 98.48599999999999 - type: recall_at_200 value: 99.157 - type: recall_at_300 value: 99.355 - type: recall_at_500 value: 99.53699999999999 - type: recall_at_700 value: 99.62299999999999 - type: recall_at_1000 value: 99.638 - type: precision_at_1 value: 50.897999999999996 - type: precision_at_2 value: 36.703 - type: precision_at_3 value: 27.926000000000002 - type: precision_at_5 value: 19.276 - type: precision_at_7 
value: 14.533999999999999 - type: precision_at_10 value: 10.678 - type: precision_at_20 value: 5.663 - type: precision_at_30 value: 3.8600000000000003 - type: precision_at_50 value: 2.358 - type: precision_at_70 value: 1.7000000000000002 - type: precision_at_100 value: 1.198 - type: precision_at_200 value: 0.603 - type: precision_at_300 value: 0.40299999999999997 - type: precision_at_500 value: 0.242 - type: precision_at_700 value: 0.173 - type: precision_at_1000 value: 0.121 - type: mrr_at_1 value: 50.897999999999996 - type: mrr_at_2 value: 59.994 - type: mrr_at_3 value: 62.553000000000004 - type: mrr_at_5 value: 64.307 - type: mrr_at_7 value: 64.864 - type: mrr_at_10 value: 65.22200000000001 - type: mrr_at_20 value: 65.499 - type: mrr_at_30 value: 65.561 - type: mrr_at_50 value: 65.592 - type: mrr_at_70 value: 65.602 - type: mrr_at_100 value: 65.607 - type: mrr_at_200 value: 65.61099999999999 - type: mrr_at_300 value: 65.61200000000001 - type: mrr_at_500 value: 65.61200000000001 - type: mrr_at_700 value: 65.61200000000001 - type: mrr_at_1000 value: 65.61200000000001 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: ndcg_at_1 value: 82.96 - type: ndcg_at_2 value: 85.614 - type: ndcg_at_3 value: 87.19 - type: ndcg_at_5 value: 88.654 - type: ndcg_at_7 value: 89.287 - type: ndcg_at_10 value: 89.785 - type: ndcg_at_20 value: 90.384 - type: ndcg_at_30 value: 90.589 - type: ndcg_at_50 value: 90.738 - type: ndcg_at_70 value: 90.789 - type: ndcg_at_100 value: 90.824 - type: ndcg_at_200 value: 90.869 - type: ndcg_at_300 value: 90.881 - type: ndcg_at_500 value: 90.886 - type: ndcg_at_700 value: 90.889 - type: ndcg_at_1000 value: 90.889 - type: map_at_1 value: 72.152 - type: map_at_2 value: 80.818 - type: map_at_3 value: 83.462 - type: map_at_5 value: 85.286 - type: map_at_7 value: 85.921 - type: map_at_10 value: 86.334 - type: map_at_20 value: 86.737 - type: map_at_30 value: 86.847 - type: map_at_50 value: 86.911 - type: map_at_70 value: 86.932 - type: map_at_100 value: 86.943 - type: map_at_200 value: 86.953 - type: map_at_300 value: 86.955 - type: map_at_500 value: 86.956 - type: map_at_700 value: 86.956 - type: map_at_1000 value: 86.956 - type: recall_at_1 value: 72.152 - type: recall_at_2 value: 84.129 - type: recall_at_3 value: 88.87 - type: recall_at_5 value: 93.067 - type: recall_at_7 value: 94.882 - type: recall_at_10 value: 96.353 - type: recall_at_20 value: 98.26700000000001 - type: recall_at_30 value: 98.92999999999999 - type: recall_at_50 value: 99.441 - type: recall_at_70 value: 99.619 - type: recall_at_100 value: 99.748 - type: recall_at_200 value: 99.911 - type: recall_at_300 value: 99.956 - type: recall_at_500 value: 99.98 - type: recall_at_700 value: 99.991 - type: recall_at_1000 value: 99.996 - type: precision_at_1 value: 82.96 - type: precision_at_2 value: 52.175000000000004 - type: precision_at_3 value: 38.223 - type: precision_at_5 value: 25.056 - type: precision_at_7 value: 18.717 - type: precision_at_10 value: 13.614999999999998 - type: precision_at_20 value: 7.208 - type: precision_at_30 value: 4.928 - type: precision_at_50 value: 3.024 - type: precision_at_70 value: 2.183 - type: precision_at_100 value: 1.54 - type: precision_at_200 value: 0.779 - type: precision_at_300 value: 0.521 - type: precision_at_500 value: 0.313 - type: precision_at_700 value: 0.22399999999999998 - type: precision_at_1000 value: 0.157 - type: mrr_at_1 value: 82.96 - type: mrr_at_2 value: 87.005 - type: mrr_at_3 value: 
88.07199999999999 - type: mrr_at_5 value: 88.634 - type: mrr_at_7 value: 88.793 - type: mrr_at_10 value: 88.87899999999999 - type: mrr_at_20 value: 88.94999999999999 - type: mrr_at_30 value: 88.96 - type: mrr_at_50 value: 88.965 - type: mrr_at_70 value: 88.966 - type: mrr_at_100 value: 88.967 - type: mrr_at_200 value: 88.967 - type: mrr_at_300 value: 88.967 - type: mrr_at_500 value: 88.967 - type: mrr_at_700 value: 88.967 - type: mrr_at_1000 value: 88.967 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 59.90388554491155 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 67.64232539036783 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: ndcg_at_1 value: 22.6 - type: ndcg_at_2 value: 20.355999999999998 - type: ndcg_at_3 value: 18.536 - type: ndcg_at_5 value: 16.523 - type: ndcg_at_7 value: 17.979 - type: ndcg_at_10 value: 19.908 - type: ndcg_at_20 value: 22.887 - type: ndcg_at_30 value: 24.43 - type: ndcg_at_50 value: 25.959 - type: ndcg_at_70 value: 26.989 - type: ndcg_at_100 value: 27.977 - type: ndcg_at_200 value: 29.831000000000003 - type: ndcg_at_300 value: 30.787 - type: ndcg_at_500 value: 31.974999999999998 - type: ndcg_at_700 value: 32.554 - type: ndcg_at_1000 value: 33.277 - type: map_at_1 value: 4.593 - type: map_at_2 value: 6.923 - type: map_at_3 value: 8.3 - type: map_at_5 value: 10.072000000000001 - type: map_at_7 value: 10.782 - type: map_at_10 value: 11.72 - type: map_at_20 value: 12.838 - type: map_at_30 value: 13.257 - type: map_at_50 value: 13.569 - type: map_at_70 value: 13.733 - type: map_at_100 value: 13.858999999999998 - type: map_at_200 value: 14.018 - type: map_at_300 value: 14.072999999999999 - type: map_at_500 value: 14.126 - type: map_at_700 value: 14.145 - type: map_at_1000 value: 14.161999999999999 - type: recall_at_1 value: 4.593 - type: recall_at_2 value: 7.997999999999999 - type: recall_at_3 value: 10.563 - type: recall_at_5 value: 14.907 - type: recall_at_7 value: 17.4 - type: recall_at_10 value: 21.18 - type: recall_at_20 value: 28.144999999999996 - type: recall_at_30 value: 32.462 - type: recall_at_50 value: 37.267 - type: recall_at_70 value: 40.875 - type: recall_at_100 value: 44.641999999999996 - type: recall_at_200 value: 52.573 - type: recall_at_300 value: 57.089999999999996 - type: recall_at_500 value: 63.14300000000001 - type: recall_at_700 value: 66.313 - type: recall_at_1000 value: 70.458 - type: precision_at_1 value: 22.6 - type: precision_at_2 value: 19.7 - type: precision_at_3 value: 17.333000000000002 - type: precision_at_5 value: 14.680000000000001 - type: precision_at_7 value: 12.243 - type: precision_at_10 value: 10.440000000000001 - type: precision_at_20 value: 6.944999999999999 - type: precision_at_30 value: 5.333 - type: precision_at_50 value: 3.678 - type: precision_at_70 value: 2.881 - type: precision_at_100 value: 2.2030000000000003 - type: precision_at_200 value: 1.295 - type: precision_at_300 value: 0.9369999999999999 - type: precision_at_500 value: 0.622 - type: precision_at_700 value: 0.466 - type: precision_at_1000 value: 0.347 - type: mrr_at_1 value: 22.6 - type: mrr_at_2 value: 27.900000000000002 - type: mrr_at_3 value: 30.067 - type: mrr_at_5 
value: 32.207 - type: mrr_at_7 value: 33.004 - type: mrr_at_10 value: 33.596 - type: mrr_at_20 value: 34.268 - type: mrr_at_30 value: 34.492 - type: mrr_at_50 value: 34.628 - type: mrr_at_70 value: 34.681 - type: mrr_at_100 value: 34.717 - type: mrr_at_200 value: 34.757 - type: mrr_at_300 value: 34.768 - type: mrr_at_500 value: 34.772 - type: mrr_at_700 value: 34.774 - type: mrr_at_1000 value: 34.775 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 86.90122745229677 - type: cos_sim_spearman value: 82.92294737327579 - type: euclidean_pearson value: 84.08979655773187 - type: euclidean_spearman value: 82.92294657285412 - type: manhattan_pearson value: 84.09347480531832 - type: manhattan_spearman value: 82.91564613948087 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 87.01218713698583 - type: cos_sim_spearman value: 79.46865215168464 - type: euclidean_pearson value: 83.22621889891909 - type: euclidean_spearman value: 79.46853821709514 - type: manhattan_pearson value: 83.69962580788805 - type: manhattan_spearman value: 79.9561593356932 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.98438696342964 - type: cos_sim_spearman value: 89.15419511870839 - type: euclidean_pearson value: 88.49646141802894 - type: euclidean_spearman value: 89.15419503946019 - type: manhattan_pearson value: 88.6420585616327 - type: manhattan_spearman value: 89.42648950757743 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 87.30772547759544 - type: cos_sim_spearman value: 84.93199878424691 - type: euclidean_pearson value: 86.16266630395455 - type: euclidean_spearman value: 84.93198798543634 - type: manhattan_pearson value: 86.14285723189803 - type: manhattan_spearman value: 85.0361672522687 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 90.21342071197127 - type: cos_sim_spearman value: 90.7407512744838 - type: euclidean_pearson value: 90.1517933113061 - type: euclidean_spearman value: 90.74075125431919 - type: manhattan_pearson value: 90.17963034676193 - type: manhattan_spearman value: 90.88999275865135 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 86.82518054100498 - type: cos_sim_spearman value: 87.81570533154735 - type: euclidean_pearson value: 86.91684561573618 - type: euclidean_spearman value: 87.81570533154735 - type: manhattan_pearson value: 86.98311935744032 - type: manhattan_spearman value: 87.9594667151966 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 92.09578436612053 - type: cos_sim_spearman value: 92.01519349090438 - type: euclidean_pearson value: 92.07113635890894 - type: euclidean_spearman value: 92.01519349090438 - type: manhattan_pearson value: 
91.89343820765625 - type: manhattan_spearman value: 91.7443476810177 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 69.29997751464549 - type: cos_sim_spearman value: 68.36425436812782 - type: euclidean_pearson value: 69.81381677661783 - type: euclidean_spearman value: 68.36425436812782 - type: manhattan_pearson value: 69.92823397008026 - type: manhattan_spearman value: 68.35770640039254 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 88.39126315452359 - type: cos_sim_spearman value: 88.99708463265337 - type: euclidean_pearson value: 88.60793820038607 - type: euclidean_spearman value: 88.99708463265337 - type: manhattan_pearson value: 88.69860633571047 - type: manhattan_spearman value: 89.20094593888012 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 86.58028062818582 - type: mrr value: 96.53586790841693 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: ndcg_at_1 value: 66.333 - type: ndcg_at_2 value: 70.655 - type: ndcg_at_3 value: 72.801 - type: ndcg_at_5 value: 75.793 - type: ndcg_at_7 value: 76.946 - type: ndcg_at_10 value: 77.66199999999999 - type: ndcg_at_20 value: 78.786 - type: ndcg_at_30 value: 79.066 - type: ndcg_at_50 value: 79.255 - type: ndcg_at_70 value: 79.423 - type: ndcg_at_100 value: 79.476 - type: ndcg_at_200 value: 79.65299999999999 - type: ndcg_at_300 value: 79.696 - type: ndcg_at_500 value: 79.73599999999999 - type: ndcg_at_700 value: 79.77199999999999 - type: ndcg_at_1000 value: 79.77199999999999 - type: map_at_1 value: 63.383 - type: map_at_2 value: 68.144 - type: map_at_3 value: 70.19800000000001 - type: map_at_5 value: 72.38 - type: map_at_7 value: 72.955 - type: map_at_10 value: 73.312 - type: map_at_20 value: 73.678 - type: map_at_30 value: 73.72800000000001 - type: map_at_50 value: 73.75500000000001 - type: map_at_70 value: 73.771 - type: map_at_100 value: 73.776 - type: map_at_200 value: 73.783 - type: map_at_300 value: 73.784 - type: map_at_500 value: 73.785 - type: map_at_700 value: 73.786 - type: map_at_1000 value: 73.786 - type: recall_at_1 value: 63.383 - type: recall_at_2 value: 72.283 - type: recall_at_3 value: 77.183 - type: recall_at_5 value: 84.56099999999999 - type: recall_at_7 value: 87.67200000000001 - type: recall_at_10 value: 89.822 - type: recall_at_20 value: 94 - type: recall_at_30 value: 95.333 - type: recall_at_50 value: 96.333 - type: recall_at_70 value: 97.333 - type: recall_at_100 value: 97.667 - type: recall_at_200 value: 99 - type: recall_at_300 value: 99.333 - type: recall_at_500 value: 99.667 - type: recall_at_700 value: 100 - type: recall_at_1000 value: 100 - type: precision_at_1 value: 66.333 - type: precision_at_2 value: 38.667 - type: precision_at_3 value: 28.111000000000004 - type: precision_at_5 value: 18.933 - type: precision_at_7 value: 14.094999999999999 - type: precision_at_10 value: 10.167 - type: precision_at_20 value: 5.35 - type: precision_at_30 value: 3.611 - type: precision_at_50 value: 2.1870000000000003 - type: precision_at_70 value: 1.576 - type: precision_at_100 value: 1.107 - type: precision_at_200 value: 
0.5599999999999999 - type: precision_at_300 value: 0.374 - type: precision_at_500 value: 0.22499999999999998 - type: precision_at_700 value: 0.161 - type: precision_at_1000 value: 0.11299999999999999 - type: mrr_at_1 value: 66.333 - type: mrr_at_2 value: 70.833 - type: mrr_at_3 value: 72.167 - type: mrr_at_5 value: 73.6 - type: mrr_at_7 value: 74.084 - type: mrr_at_10 value: 74.283 - type: mrr_at_20 value: 74.54499999999999 - type: mrr_at_30 value: 74.59599999999999 - type: mrr_at_50 value: 74.622 - type: mrr_at_70 value: 74.639 - type: mrr_at_100 value: 74.643 - type: mrr_at_200 value: 74.65 - type: mrr_at_300 value: 74.652 - type: mrr_at_500 value: 74.653 - type: mrr_at_700 value: 74.653 - type: mrr_at_1000 value: 74.653 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.84554455445544 - type: cos_sim_ap value: 96.31178339136798 - type: cos_sim_f1 value: 92.1921921921922 - type: cos_sim_precision value: 92.28456913827655 - type: cos_sim_recall value: 92.10000000000001 - type: dot_accuracy value: 99.84554455445544 - type: dot_ap value: 96.31178339136797 - type: dot_f1 value: 92.1921921921922 - type: dot_precision value: 92.28456913827655 - type: dot_recall value: 92.10000000000001 - type: euclidean_accuracy value: 99.84554455445544 - type: euclidean_ap value: 96.31178339136798 - type: euclidean_f1 value: 92.1921921921922 - type: euclidean_precision value: 92.28456913827655 - type: euclidean_recall value: 92.10000000000001 - type: manhattan_accuracy value: 99.84752475247525 - type: manhattan_ap value: 96.4591954606088 - type: manhattan_f1 value: 92.25352112676056 - type: manhattan_precision value: 92.81376518218623 - type: manhattan_recall value: 91.7 - type: max_accuracy value: 99.84752475247525 - type: max_ap value: 96.4591954606088 - type: max_f1 value: 92.25352112676056 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 74.24659759283294 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 46.77690051260451 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.68436757803185 - type: mrr value: 56.82157711569475 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.652482405629843 - type: cos_sim_spearman value: 31.16341822347735 - type: dot_pearson value: 31.652479892699837 - type: dot_spearman value: 31.16341822347735 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: ndcg_at_1 value: 92 - type: ndcg_at_2 value: 90.839 - type: ndcg_at_3 value: 90.642 - type: ndcg_at_5 value: 90.348 - type: ndcg_at_7 value: 89.015 - type: ndcg_at_10 value: 87.599 - type: ndcg_at_20 value: 84.434 - type: ndcg_at_30 value: 81.655 - type: ndcg_at_50 value: 77.278 - type: ndcg_at_70 
value: 73.957 - type: ndcg_at_100 value: 69.56 - type: ndcg_at_200 value: 60.724000000000004 - type: ndcg_at_300 value: 57.245000000000005 - type: ndcg_at_500 value: 56.316 - type: ndcg_at_700 value: 58.399 - type: ndcg_at_1000 value: 62.21600000000001 - type: map_at_1 value: 0.247 - type: map_at_2 value: 0.488 - type: map_at_3 value: 0.7230000000000001 - type: map_at_5 value: 1.204 - type: map_at_7 value: 1.6500000000000001 - type: map_at_10 value: 2.292 - type: map_at_20 value: 4.274 - type: map_at_30 value: 6.027 - type: map_at_50 value: 9.083 - type: map_at_70 value: 11.751000000000001 - type: map_at_100 value: 14.912 - type: map_at_200 value: 22.213 - type: map_at_300 value: 26.667999999999996 - type: map_at_500 value: 31.556 - type: map_at_700 value: 34.221000000000004 - type: map_at_1000 value: 36.443999999999996 - type: recall_at_1 value: 0.247 - type: recall_at_2 value: 0.49899999999999994 - type: recall_at_3 value: 0.742 - type: recall_at_5 value: 1.247 - type: recall_at_7 value: 1.722 - type: recall_at_10 value: 2.405 - type: recall_at_20 value: 4.583 - type: recall_at_30 value: 6.587999999999999 - type: recall_at_50 value: 10.188 - type: recall_at_70 value: 13.496 - type: recall_at_100 value: 17.578 - type: recall_at_200 value: 28.158 - type: recall_at_300 value: 35.532000000000004 - type: recall_at_500 value: 45.31 - type: recall_at_700 value: 51.822 - type: recall_at_1000 value: 58.53 - type: precision_at_1 value: 96 - type: precision_at_2 value: 96 - type: precision_at_3 value: 95.333 - type: precision_at_5 value: 94.8 - type: precision_at_7 value: 93.429 - type: precision_at_10 value: 91.4 - type: precision_at_20 value: 87.7 - type: precision_at_30 value: 84.867 - type: precision_at_50 value: 80.24 - type: precision_at_70 value: 76.371 - type: precision_at_100 value: 71.08 - type: precision_at_200 value: 59.4 - type: precision_at_300 value: 51.459999999999994 - type: precision_at_500 value: 40.644000000000005 - type: precision_at_700 value: 33.889 - type: precision_at_1000 value: 27.250000000000004 - type: mrr_at_1 value: 96 - type: mrr_at_2 value: 98 - type: mrr_at_3 value: 98 - type: mrr_at_5 value: 98 - type: mrr_at_7 value: 98 - type: mrr_at_10 value: 98 - type: mrr_at_20 value: 98 - type: mrr_at_30 value: 98 - type: mrr_at_50 value: 98 - type: mrr_at_70 value: 98 - type: mrr_at_100 value: 98 - type: mrr_at_200 value: 98 - type: mrr_at_300 value: 98 - type: mrr_at_500 value: 98 - type: mrr_at_700 value: 98 - type: mrr_at_1000 value: 98 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: ndcg_at_1 value: 43.878 - type: ndcg_at_2 value: 37.956 - type: ndcg_at_3 value: 35.053 - type: ndcg_at_5 value: 32.59 - type: ndcg_at_7 value: 30.226 - type: ndcg_at_10 value: 29.005 - type: ndcg_at_20 value: 30.11 - type: ndcg_at_30 value: 32.019999999999996 - type: ndcg_at_50 value: 34.354 - type: ndcg_at_70 value: 36.665 - type: ndcg_at_100 value: 38.888 - type: ndcg_at_200 value: 43.435 - type: ndcg_at_300 value: 45.795 - type: ndcg_at_500 value: 48.699999999999996 - type: ndcg_at_700 value: 50.242 - type: ndcg_at_1000 value: 51.529 - type: map_at_1 value: 3.521 - type: map_at_2 value: 5.309 - type: map_at_3 value: 6.576 - type: map_at_5 value: 8.97 - type: map_at_7 value: 10.194 - type: map_at_10 value: 11.949 - type: map_at_20 value: 14.686 - type: map_at_30 value: 15.8 - type: map_at_50 value: 16.59 - type: map_at_70 value: 17.2 - type: map_at_100 value: 17.765 - type: map_at_200 value: 18.636 - 
type: map_at_300 value: 18.972 - type: map_at_500 value: 19.301 - type: map_at_700 value: 19.445 - type: map_at_1000 value: 19.546 - type: recall_at_1 value: 3.521 - type: recall_at_2 value: 5.848 - type: recall_at_3 value: 7.657 - type: recall_at_5 value: 11.368 - type: recall_at_7 value: 13.748 - type: recall_at_10 value: 18.061 - type: recall_at_20 value: 26.844 - type: recall_at_30 value: 31.186000000000003 - type: recall_at_50 value: 35.951 - type: recall_at_70 value: 40.961999999999996 - type: recall_at_100 value: 46.743 - type: recall_at_200 value: 58.483 - type: recall_at_300 value: 65.973 - type: recall_at_500 value: 75.233 - type: recall_at_700 value: 80.472 - type: recall_at_1000 value: 85.02 - type: precision_at_1 value: 46.939 - type: precision_at_2 value: 38.775999999999996 - type: precision_at_3 value: 34.694 - type: precision_at_5 value: 31.429000000000002 - type: precision_at_7 value: 27.697 - type: precision_at_10 value: 24.490000000000002 - type: precision_at_20 value: 18.776 - type: precision_at_30 value: 15.034 - type: precision_at_50 value: 10.857 - type: precision_at_70 value: 9.096 - type: precision_at_100 value: 7.51 - type: precision_at_200 value: 4.929 - type: precision_at_300 value: 3.7760000000000002 - type: precision_at_500 value: 2.6780000000000004 - type: precision_at_700 value: 2.085 - type: precision_at_1000 value: 1.5709999999999997 - type: mrr_at_1 value: 46.939 - type: mrr_at_2 value: 55.102 - type: mrr_at_3 value: 57.823 - type: mrr_at_5 value: 60.68 - type: mrr_at_7 value: 60.972 - type: mrr_at_10 value: 61.199000000000005 - type: mrr_at_20 value: 61.831 - type: mrr_at_30 value: 61.831 - type: mrr_at_50 value: 61.873 - type: mrr_at_70 value: 61.873 - type: mrr_at_100 value: 61.873 - type: mrr_at_200 value: 61.873 - type: mrr_at_300 value: 61.873 - type: mrr_at_500 value: 61.873 - type: mrr_at_700 value: 61.873 - type: mrr_at_1000 value: 61.873 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.3294 - type: ap value: 14.561333393364736 - type: f1 value: 53.992309820496466 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 63.63893604980192 - type: f1 value: 63.92959380489434 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 56.270879258659775 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.71073493473207 - type: cos_sim_ap value: 81.52392540284202 - type: cos_sim_f1 value: 74.71162377994676 - type: cos_sim_precision value: 71.89558428885094 - type: cos_sim_recall value: 77.75725593667546 - type: dot_accuracy value: 88.71073493473207 - type: dot_ap value: 81.52394754041109 - type: dot_f1 value: 74.71162377994676 - type: dot_precision value: 71.89558428885094 - type: dot_recall value: 77.75725593667546 - type: euclidean_accuracy value: 88.71073493473207 - type: euclidean_ap value: 81.52392035435321 - type: euclidean_f1 value: 
74.71162377994676 - type: euclidean_precision value: 71.89558428885094 - type: euclidean_recall value: 77.75725593667546 - type: manhattan_accuracy value: 88.47231328604637 - type: manhattan_ap value: 81.22907439267321 - type: manhattan_f1 value: 74.3351571446749 - type: manhattan_precision value: 71.78667977390022 - type: manhattan_recall value: 77.0712401055409 - type: max_accuracy value: 88.71073493473207 - type: max_ap value: 81.52394754041109 - type: max_f1 value: 74.71162377994676 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.85136026700819 - type: cos_sim_ap value: 87.7768002924216 - type: cos_sim_f1 value: 80.358908624794 - type: cos_sim_precision value: 76.62918209122023 - type: cos_sim_recall value: 84.47028025870034 - type: dot_accuracy value: 89.85136026700819 - type: dot_ap value: 87.77680027889778 - type: dot_f1 value: 80.358908624794 - type: dot_precision value: 76.62918209122023 - type: dot_recall value: 84.47028025870034 - type: euclidean_accuracy value: 89.85136026700819 - type: euclidean_ap value: 87.77680174697751 - type: euclidean_f1 value: 80.358908624794 - type: euclidean_precision value: 76.62918209122023 - type: euclidean_recall value: 84.47028025870034 - type: manhattan_accuracy value: 89.86300306593705 - type: manhattan_ap value: 87.78613271895861 - type: manhattan_f1 value: 80.31831016905645 - type: manhattan_precision value: 76.68230516070304 - type: manhattan_recall value: 84.3162919618109 - type: max_accuracy value: 89.86300306593705 - type: max_ap value: 87.78613271895861 - type: max_f1 value: 80.358908624794 --- <h1 align="center">Salesforce/SFR-Embedding-Mistral</h1> **SFR-Embedding by Salesforce Research.** The model is trained on top of [E5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) and [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses. Please refer to specific papers for more details: - [MTEB benchmark](https://arxiv.org/abs/2210.07316) - [Mistral](https://arxiv.org/abs/2310.06825) - [E5-mistral-7b-instruct](https://arxiv.org/pdf/2401.00368.pdf) More technical details will be updated later. 
## How to run

### Transformers

The models can be used as follows:

```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'


# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'How to bake a chocolate cake'),
    get_detailed_instruct(task, 'Symptoms of the flu')
]
# No need to add instruction for retrieval documents
passages = [
    "To bake a delicious chocolate cake, you'll need the following ingredients: all-purpose flour, sugar, cocoa powder, baking powder, baking soda, salt, eggs, milk, vegetable oil, and vanilla extract. Start by preheating your oven to 350°F (175°C). In a mixing bowl, combine the dry ingredients (flour, sugar, cocoa powder, baking powder, baking soda, and salt). In a separate bowl, whisk together the wet ingredients (eggs, milk, vegetable oil, and vanilla extract). Gradually add the wet mixture to the dry ingredients, stirring until well combined. Pour the batter into a greased cake pan and bake for 30-35 minutes. Let it cool before frosting with your favorite chocolate frosting. Enjoy your homemade chocolate cake!",
    "The flu, or influenza, is an illness caused by influenza viruses. Common symptoms of the flu include a high fever, chills, cough, sore throat, runny or stuffy nose, body aches, headache, fatigue, and sometimes nausea and vomiting. These symptoms can come on suddenly and are usually more severe than the common cold. It's important to get plenty of rest, stay hydrated, and consult a healthcare professional if you suspect you have the flu. In some cases, antiviral medications can help alleviate symptoms and reduce the duration of the illness."
]

# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('Salesforce/SFR-Embedding-Mistral')
model = AutoModel.from_pretrained('Salesforce/SFR-Embedding-Mistral')

# get the embeddings
max_length = 4096
input_texts = queries + passages
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[86.7153549194336, 36.64569091796875], [35.00493621826172, 82.0738525390625]]
```

### Sentence Transformers

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Salesforce/SFR-Embedding-Mistral")


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'


# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'How to bake a chocolate cake'),
    get_detailed_instruct(task, 'Symptoms of the flu')
]
# No need to add instruction for retrieval documents
passages = [
    "To bake a delicious chocolate cake, you'll need the following ingredients: all-purpose flour, sugar, cocoa powder, baking powder, baking soda, salt, eggs, milk, vegetable oil, and vanilla extract. Start by preheating your oven to 350°F (175°C). In a mixing bowl, combine the dry ingredients (flour, sugar, cocoa powder, baking powder, baking soda, and salt). In a separate bowl, whisk together the wet ingredients (eggs, milk, vegetable oil, and vanilla extract). Gradually add the wet mixture to the dry ingredients, stirring until well combined. Pour the batter into a greased cake pan and bake for 30-35 minutes. Let it cool before frosting with your favorite chocolate frosting. Enjoy your homemade chocolate cake!",
    "The flu, or influenza, is an illness caused by influenza viruses. Common symptoms of the flu include a high fever, chills, cough, sore throat, runny or stuffy nose, body aches, headache, fatigue, and sometimes nausea and vomiting. These symptoms can come on suddenly and are usually more severe than the common cold. It's important to get plenty of rest, stay hydrated, and consult a healthcare professional if you suspect you have the flu. In some cases, antiviral medications can help alleviate symptoms and reduce the duration of the illness."
]

embeddings = model.encode(queries + passages)
scores = util.cos_sim(embeddings[:2], embeddings[2:]) * 100
print(scores.tolist())
# [[86.71537780761719, 36.645721435546875], [35.00497055053711, 82.07388305664062]]
```

### MTEB Benchmark Evaluation

Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB](https://arxiv.org/abs/2210.07316) benchmarks. A minimal, unofficial evaluation sketch using the standalone `mteb` package is included after the citation below.

SFR-Embedding Team (∗ indicates lead contributors).

* Rui Meng*
* Ye Liu*
* Shafiq Rayhan Joty
* Caiming Xiong
* Yingbo Zhou
* Semih Yavuz

### Citation

```bibtex
@misc{SFRAIResearch2024,
  title={SFR-Embedding-Mistral: Enhance Text Retrieval with Transfer Learning},
  author={Rui Meng, Ye Liu, Shafiq Rayhan Joty, Caiming Xiong, Yingbo Zhou, Semih Yavuz},
  howpublished={Salesforce AI Research Blog},
  year={2024},
  url={https://blog.salesforceairesearch.com/sfr-embedded-mistral/}
}
```
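### Quick MTEB sanity check (unofficial sketch)

The official reproduction path is the unilm/e5 pipeline linked above. The snippet below is only a minimal sketch for spot-checking a couple of tasks, assuming the standalone `mteb` package and its long-standing `MTEB(tasks=[...])` interface (newer releases may prefer `mteb.get_tasks(...)`). The task subset and output folder are illustrative choices, and retrieval-style tasks would normally use the instruction-prefixed queries shown earlier, which this sketch does not wire in.

```python
# Minimal, unofficial sketch: spot-check two MTEB tasks with the standalone
# `mteb` package (assumed installed via `pip install mteb sentence-transformers`).
# This is not the official unilm/e5 reproduction pipeline; scores may differ.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Salesforce/SFR-Embedding-Mistral")

# Illustrative task subset; the full benchmark covers many more tasks.
evaluation = MTEB(tasks=["Banking77Classification", "STS16"])

# Results are written as JSON files under the (hypothetical) output folder.
results = evaluation.run(model, output_folder="results/SFR-Embedding-Mistral")
print(results)
```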
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
{"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["mteb", "sentence-transformers", "transformers"], "model-index": [{"name": "SFR-Embedding-Mistral", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 77.92537313432834}, {"type": "ap", "value": 40.86767661556651}, {"type": "f1", "value": 71.65758897929837}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 95.967}, {"type": "ap", "value": 94.46300829592593}, {"type": "f1", "value": 95.96507173189292}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 54.352000000000004}, {"type": "f1", "value": 53.636682615380174}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 43.314}, {"type": "ndcg_at_2", "value": 54.757}, {"type": "ndcg_at_3", "value": 58.84700000000001}, {"type": "ndcg_at_5", "value": 63.634}, {"type": "ndcg_at_7", "value": 65.741}, {"type": "ndcg_at_10", "value": 67.171}, {"type": "ndcg_at_20", "value": 68.585}, {"type": "ndcg_at_30", "value": 68.81}, {"type": "ndcg_at_50", "value": 68.932}, {"type": "ndcg_at_70", "value": 68.992}, {"type": "ndcg_at_100", "value": 69.014}, {"type": "ndcg_at_200", "value": 69.014}, {"type": "ndcg_at_300", "value": 69.014}, {"type": "ndcg_at_500", "value": 69.014}, {"type": "ndcg_at_700", "value": 69.014}, {"type": "ndcg_at_1000", "value": 69.014}, {"type": "map_at_1", "value": 43.314}, {"type": "map_at_2", "value": 52.383}, {"type": "map_at_3", "value": 55.108999999999995}, {"type": "map_at_5", "value": 57.772999999999996}, {"type": "map_at_7", "value": 58.718}, {"type": "map_at_10", "value": 59.256}, {"type": "map_at_20", "value": 59.668}, {"type": "map_at_30", "value": 59.709999999999994}, {"type": "map_at_50", "value": 59.727}, {"type": "map_at_70", "value": 59.733999999999995}, {"type": "map_at_100", "value": 59.73500000000001}, {"type": "map_at_200", "value": 59.73500000000001}, {"type": "map_at_300", "value": 59.73500000000001}, {"type": "map_at_500", "value": 59.73500000000001}, {"type": "map_at_700", "value": 59.73500000000001}, {"type": "map_at_1000", "value": 59.73500000000001}, {"type": "recall_at_1", "value": 43.314}, {"type": "recall_at_2", "value": 61.451}, {"type": "recall_at_3", "value": 69.63000000000001}, {"type": "recall_at_5", "value": 81.223}, {"type": "recall_at_7", "value": 87.33999999999999}, {"type": "recall_at_10", "value": 92.034}, {"type": "recall_at_20", "value": 97.44}, {"type": "recall_at_30", "value": 98.506}, {"type": "recall_at_50", "value": 99.14699999999999}, {"type": "recall_at_70", "value": 99.502}, {"type": "recall_at_100", "value": 99.644}, {"type": "recall_at_200", "value": 99.644}, {"type": "recall_at_300", "value": 99.644}, {"type": "recall_at_500", "value": 99.644}, {"type": "recall_at_700", "value": 99.644}, {"type": "recall_at_1000", "value": 99.644}, 
{"type": "precision_at_1", "value": 43.314}, {"type": "precision_at_2", "value": 30.725}, {"type": "precision_at_3", "value": 23.21}, {"type": "precision_at_5", "value": 16.245}, {"type": "precision_at_7", "value": 12.477}, {"type": "precision_at_10", "value": 9.203}, {"type": "precision_at_20", "value": 4.872}, {"type": "precision_at_30", "value": 3.2840000000000003}, {"type": "precision_at_50", "value": 1.983}, {"type": "precision_at_70", "value": 1.421}, {"type": "precision_at_100", "value": 0.996}, {"type": "precision_at_200", "value": 0.498}, {"type": "precision_at_300", "value": 0.332}, {"type": "precision_at_500", "value": 0.199}, {"type": "precision_at_700", "value": 0.14200000000000002}, {"type": "precision_at_1000", "value": 0.1}, {"type": "mrr_at_1", "value": 44.666}, {"type": "mrr_at_2", "value": 52.418}, {"type": "mrr_at_3", "value": 55.595000000000006}, {"type": "mrr_at_5", "value": 58.205}, {"type": "mrr_at_7", "value": 59.202999999999996}, {"type": "mrr_at_10", "value": 59.727}, {"type": "mrr_at_20", "value": 60.133}, {"type": "mrr_at_30", "value": 60.178}, {"type": "mrr_at_50", "value": 60.192}, {"type": "mrr_at_70", "value": 60.19799999999999}, {"type": "mrr_at_100", "value": 60.199999999999996}, {"type": "mrr_at_200", "value": 60.199999999999996}, {"type": "mrr_at_300", "value": 60.199999999999996}, {"type": "mrr_at_500", "value": 60.199999999999996}, {"type": "mrr_at_700", "value": 60.199999999999996}, {"type": "mrr_at_1000", "value": 60.199999999999996}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 52.07508593014336}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 47.381339333240675}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 67.58376647859171}, {"type": "mrr", "value": 80.56885635140483}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.40107280274783}, {"type": "cos_sim_spearman", "value": 86.07003345325681}, {"type": "euclidean_pearson", "value": 87.1726034325395}, {"type": "euclidean_spearman", "value": 86.07003345325681}, {"type": "manhattan_pearson", "value": 87.25660625029772}, {"type": "manhattan_spearman", "value": 86.3808839096893}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 88.81168831168831}, {"type": "f1", "value": 88.76514496560141}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 43.9382520874344}]}, {"task": {"type": 
"Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 41.14351847240913}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 34.51166666666667}, {"type": "ndcg_at_2", "value": 38.51591666666667}, {"type": "ndcg_at_3", "value": 40.95083333333333}, {"type": "ndcg_at_5", "value": 43.580666666666666}, {"type": "ndcg_at_7", "value": 45.0625}, {"type": "ndcg_at_10", "value": 46.49083333333333}, {"type": "ndcg_at_20", "value": 48.731333333333325}, {"type": "ndcg_at_30", "value": 49.78666666666667}, {"type": "ndcg_at_50", "value": 50.84049999999999}, {"type": "ndcg_at_70", "value": 51.393750000000004}, {"type": "ndcg_at_100", "value": 51.883333333333326}, {"type": "ndcg_at_200", "value": 52.65225}, {"type": "ndcg_at_300", "value": 52.98241666666669}, {"type": "ndcg_at_500", "value": 53.28541666666668}, {"type": "ndcg_at_700", "value": 53.49241666666668}, {"type": "ndcg_at_1000", "value": 53.63758333333334}, {"type": "map_at_1", "value": 29.10075}, {"type": "map_at_2", "value": 34.636500000000005}, {"type": "map_at_3", "value": 36.92033333333333}, {"type": "map_at_5", "value": 38.81641666666666}, {"type": "map_at_7", "value": 39.635416666666664}, {"type": "map_at_10", "value": 40.294583333333335}, {"type": "map_at_20", "value": 41.07574999999999}, {"type": "map_at_30", "value": 41.333}, {"type": "map_at_50", "value": 41.529333333333334}, {"type": "map_at_70", "value": 41.606833333333334}, {"type": "map_at_100", "value": 41.66224999999999}, {"type": "map_at_200", "value": 41.72691666666666}, {"type": "map_at_300", "value": 41.746583333333334}, {"type": "map_at_500", "value": 41.75983333333333}, {"type": "map_at_700", "value": 41.76558333333333}, {"type": "map_at_1000", "value": 41.769000000000005}, {"type": "recall_at_1", "value": 29.10075}, {"type": "recall_at_2", "value": 39.07658333333333}, {"type": "recall_at_3", "value": 44.93591666666667}, {"type": "recall_at_5", "value": 51.66883333333333}, {"type": "recall_at_7", "value": 55.881000000000014}, {"type": "recall_at_10", "value": 60.34691666666667}, {"type": "recall_at_20", "value": 68.44016666666667}, {"type": "recall_at_30", "value": 72.90766666666667}, {"type": "recall_at_50", "value": 77.843}, {"type": "recall_at_70", "value": 80.70366666666668}, {"type": "recall_at_100", "value": 83.42866666666667}, {"type": "recall_at_200", "value": 88.06816666666668}, {"type": "recall_at_300", "value": 90.249}, {"type": "recall_at_500", "value": 92.37616666666668}, {"type": "recall_at_700", "value": 93.978}, {"type": "recall_at_1000", "value": 95.12791666666666}, {"type": "precision_at_1", "value": 34.51166666666667}, {"type": "precision_at_2", "value": 24.326333333333327}, {"type": "precision_at_3", "value": 19.099249999999998}, {"type": "precision_at_5", "value": 13.672666666666666}, {"type": "precision_at_7", "value": 10.772}, {"type": "precision_at_10", "value": 8.302166666666668}, {"type": "precision_at_20", "value": 4.8960833333333325}, {"type": "precision_at_30", "value": 3.551083333333333}, {"type": "precision_at_50", "value": 2.3386666666666662}, {"type": "precision_at_70", "value": 1.7605833333333334}, {"type": "precision_at_100", "value": 1.2965}, {"type": "precision_at_200", "value": 
0.7106666666666668}, {"type": "precision_at_300", "value": 0.4955}, {"type": "precision_at_500", "value": 0.3106666666666667}, {"type": "precision_at_700", "value": 0.22791666666666668}, {"type": "precision_at_1000", "value": 0.1635833333333333}, {"type": "mrr_at_1", "value": 34.51166666666667}, {"type": "mrr_at_2", "value": 39.954249999999995}, {"type": "mrr_at_3", "value": 41.93741666666668}, {"type": "mrr_at_5", "value": 43.487166666666674}, {"type": "mrr_at_7", "value": 44.14983333333333}, {"type": "mrr_at_10", "value": 44.62766666666666}, {"type": "mrr_at_20", "value": 45.15291666666668}, {"type": "mrr_at_30", "value": 45.317}, {"type": "mrr_at_50", "value": 45.42875}, {"type": "mrr_at_70", "value": 45.46966666666667}, {"type": "mrr_at_100", "value": 45.49716666666667}, {"type": "mrr_at_200", "value": 45.525166666666664}, {"type": "mrr_at_300", "value": 45.53233333333335}, {"type": "mrr_at_500", "value": 45.5365}, {"type": "mrr_at_700", "value": 45.538583333333335}, {"type": "mrr_at_1000", "value": 45.539583333333326}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 35.179}, {"type": "ndcg_at_2", "value": 31.243}, {"type": "ndcg_at_3", "value": 30.562}, {"type": "ndcg_at_5", "value": 32.409}, {"type": "ndcg_at_7", "value": 34.525}, {"type": "ndcg_at_10", "value": 36.415}, {"type": "ndcg_at_20", "value": 39.443}, {"type": "ndcg_at_30", "value": 40.796}, {"type": "ndcg_at_50", "value": 42.16}, {"type": "ndcg_at_70", "value": 42.971}, {"type": "ndcg_at_100", "value": 43.691}, {"type": "ndcg_at_200", "value": 45.004}, {"type": "ndcg_at_300", "value": 45.527}, {"type": "ndcg_at_500", "value": 46.072}, {"type": "ndcg_at_700", "value": 46.387}, {"type": "ndcg_at_1000", "value": 46.663}, {"type": "map_at_1", "value": 15.692}, {"type": "map_at_2", "value": 20.116}, {"type": "map_at_3", "value": 22.6}, {"type": "map_at_5", "value": 24.701}, {"type": "map_at_7", "value": 25.934}, {"type": "map_at_10", "value": 26.843}, {"type": "map_at_20", "value": 27.975}, {"type": "map_at_30", "value": 28.372000000000003}, {"type": "map_at_50", "value": 28.671000000000003}, {"type": "map_at_70", "value": 28.803}, {"type": "map_at_100", "value": 28.895}, {"type": "map_at_200", "value": 29.011}, {"type": "map_at_300", "value": 29.042}, {"type": "map_at_500", "value": 29.065}, {"type": "map_at_700", "value": 29.075}, {"type": "map_at_1000", "value": 29.081000000000003}, {"type": "recall_at_1", "value": 15.692}, {"type": "recall_at_2", "value": 22.602}, {"type": "recall_at_3", "value": 27.814}, {"type": "recall_at_5", "value": 33.756}, {"type": "recall_at_7", "value": 38.073}, {"type": "recall_at_10", "value": 42.553000000000004}, {"type": "recall_at_20", "value": 51.121}, {"type": "recall_at_30", "value": 55.523999999999994}, {"type": "recall_at_50", "value": 60.586}, {"type": "recall_at_70", "value": 63.94}, {"type": "recall_at_100", "value": 67.134}, {"type": "recall_at_200", "value": 73.543}, {"type": "recall_at_300", "value": 76.372}, {"type": "recall_at_500", "value": 79.60199999999999}, {"type": "recall_at_700", "value": 81.536}, {"type": "recall_at_1000", "value": 83.37400000000001}, {"type": "precision_at_1", "value": 35.179}, {"type": "precision_at_2", "value": 27.199}, {"type": "precision_at_3", "value": 22.953000000000003}, {"type": "precision_at_5", "value": 17.224999999999998}, {"type": "precision_at_7", "value": 14.238999999999999}, {"type": 
"precision_at_10", "value": 11.303}, {"type": "precision_at_20", "value": 6.954000000000001}, {"type": "precision_at_30", "value": 5.116}, {"type": "precision_at_50", "value": 3.395}, {"type": "precision_at_70", "value": 2.579}, {"type": "precision_at_100", "value": 1.9109999999999998}, {"type": "precision_at_200", "value": 1.065}, {"type": "precision_at_300", "value": 0.743}, {"type": "precision_at_500", "value": 0.46699999999999997}, {"type": "precision_at_700", "value": 0.344}, {"type": "precision_at_1000", "value": 0.247}, {"type": "mrr_at_1", "value": 35.179}, {"type": "mrr_at_2", "value": 41.792}, {"type": "mrr_at_3", "value": 44.484}, {"type": "mrr_at_5", "value": 46.39}, {"type": "mrr_at_7", "value": 47.125}, {"type": "mrr_at_10", "value": 47.711999999999996}, {"type": "mrr_at_20", "value": 48.214}, {"type": "mrr_at_30", "value": 48.325}, {"type": "mrr_at_50", "value": 48.392}, {"type": "mrr_at_70", "value": 48.418}, {"type": "mrr_at_100", "value": 48.44}, {"type": "mrr_at_200", "value": 48.46}, {"type": "mrr_at_300", "value": 48.461999999999996}, {"type": "mrr_at_500", "value": 48.466}, {"type": "mrr_at_700", "value": 48.466}, {"type": "mrr_at_1000", "value": 48.467}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 62.375}, {"type": "ndcg_at_2", "value": 56.286}, {"type": "ndcg_at_3", "value": 53.665}, {"type": "ndcg_at_5", "value": 51.139}, {"type": "ndcg_at_7", "value": 49.873}, {"type": "ndcg_at_10", "value": 49.056}, {"type": "ndcg_at_20", "value": 48.783}, {"type": "ndcg_at_30", "value": 49.166}, {"type": "ndcg_at_50", "value": 51.141999999999996}, {"type": "ndcg_at_70", "value": 52.774}, {"type": "ndcg_at_100", "value": 54.403}, {"type": "ndcg_at_200", "value": 57.419}, {"type": "ndcg_at_300", "value": 58.778}, {"type": "ndcg_at_500", "value": 60.228}, {"type": "ndcg_at_700", "value": 61.07599999999999}, {"type": "ndcg_at_1000", "value": 61.846000000000004}, {"type": "map_at_1", "value": 10.359}, {"type": "map_at_2", "value": 14.446}, {"type": "map_at_3", "value": 16.689}, {"type": "map_at_5", "value": 20.096}, {"type": "map_at_7", "value": 22.247}, {"type": "map_at_10", "value": 24.468999999999998}, {"type": "map_at_20", "value": 28.938000000000002}, {"type": "map_at_30", "value": 31.134}, {"type": "map_at_50", "value": 33.403}, {"type": "map_at_70", "value": 34.486}, {"type": "map_at_100", "value": 35.337}, {"type": "map_at_200", "value": 36.364999999999995}, {"type": "map_at_300", "value": 36.735}, {"type": "map_at_500", "value": 37.057}, {"type": "map_at_700", "value": 37.225}, {"type": "map_at_1000", "value": 37.379}, {"type": "recall_at_1", "value": 10.359}, {"type": "recall_at_2", "value": 14.945}, {"type": "recall_at_3", "value": 17.694}, {"type": "recall_at_5", "value": 22.677}, {"type": "recall_at_7", "value": 26.131}, {"type": "recall_at_10", "value": 30.053}, {"type": "recall_at_20", "value": 39.518}, {"type": "recall_at_30", "value": 44.925}, {"type": "recall_at_50", "value": 52.154}, {"type": "recall_at_70", "value": 56.729}, {"type": "recall_at_100", "value": 61.18900000000001}, {"type": "recall_at_200", "value": 70.407}, {"type": "recall_at_300", "value": 74.412}, {"type": "recall_at_500", "value": 78.891}, {"type": "recall_at_700", "value": 81.74}, {"type": "recall_at_1000", "value": 84.253}, {"type": "precision_at_1", "value": 75}, {"type": "precision_at_2", "value": 64.125}, {"type": "precision_at_3", 
"value": 57.833}, {"type": "precision_at_5", "value": 50.24999999999999}, {"type": "precision_at_7", "value": 44.75}, {"type": "precision_at_10", "value": 39.75}, {"type": "precision_at_20", "value": 30.412}, {"type": "precision_at_30", "value": 25.141999999999996}, {"type": "precision_at_50", "value": 19.2}, {"type": "precision_at_70", "value": 15.729000000000001}, {"type": "precision_at_100", "value": 12.552}, {"type": "precision_at_200", "value": 7.866}, {"type": "precision_at_300", "value": 5.9270000000000005}, {"type": "precision_at_500", "value": 4.1129999999999995}, {"type": "precision_at_700", "value": 3.2460000000000004}, {"type": "precision_at_1000", "value": 2.5260000000000002}, {"type": "mrr_at_1", "value": 75}, {"type": "mrr_at_2", "value": 78.625}, {"type": "mrr_at_3", "value": 79.708}, {"type": "mrr_at_5", "value": 80.446}, {"type": "mrr_at_7", "value": 80.862}, {"type": "mrr_at_10", "value": 81.161}, {"type": "mrr_at_20", "value": 81.3}, {"type": "mrr_at_30", "value": 81.348}, {"type": "mrr_at_50", "value": 81.361}, {"type": "mrr_at_70", "value": 81.361}, {"type": "mrr_at_100", "value": 81.361}, {"type": "mrr_at_200", "value": 81.367}, {"type": "mrr_at_300", "value": 81.367}, {"type": "mrr_at_500", "value": 81.368}, {"type": "mrr_at_700", "value": 81.368}, {"type": "mrr_at_1000", "value": 81.368}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 50.239999999999995}, {"type": "f1", "value": 46.42361822342044}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 83.723}, {"type": "ndcg_at_2", "value": 86.777}, {"type": "ndcg_at_3", "value": 87.997}, {"type": "ndcg_at_5", "value": 88.864}, {"type": "ndcg_at_7", "value": 89.143}, {"type": "ndcg_at_10", "value": 89.349}, {"type": "ndcg_at_20", "value": 89.709}, {"type": "ndcg_at_30", "value": 89.82900000000001}, {"type": "ndcg_at_50", "value": 89.923}, {"type": "ndcg_at_70", "value": 89.982}, {"type": "ndcg_at_100", "value": 90.026}, {"type": "ndcg_at_200", "value": 90.10000000000001}, {"type": "ndcg_at_300", "value": 90.12599999999999}, {"type": "ndcg_at_500", "value": 90.17399999999999}, {"type": "ndcg_at_700", "value": 90.19}, {"type": "ndcg_at_1000", "value": 90.208}, {"type": "map_at_1", "value": 77.64999999999999}, {"type": "map_at_2", "value": 83.769}, {"type": "map_at_3", "value": 85.041}, {"type": "map_at_5", "value": 85.736}, {"type": "map_at_7", "value": 85.924}, {"type": "map_at_10", "value": 86.032}, {"type": "map_at_20", "value": 86.177}, {"type": "map_at_30", "value": 86.213}, {"type": "map_at_50", "value": 86.233}, {"type": "map_at_70", "value": 86.24300000000001}, {"type": "map_at_100", "value": 86.249}, {"type": "map_at_200", "value": 86.256}, {"type": "map_at_300", "value": 86.258}, {"type": "map_at_500", "value": 86.26}, {"type": "map_at_700", "value": 86.26}, {"type": "map_at_1000", "value": 86.261}, {"type": "recall_at_1", "value": 77.64999999999999}, {"type": "recall_at_2", "value": 88.53999999999999}, {"type": "recall_at_3", "value": 91.696}, {"type": "recall_at_5", "value": 93.916}, {"type": "recall_at_7", "value": 94.731}, {"type": "recall_at_10", "value": 95.318}, {"type": "recall_at_20", "value": 96.507}, {"type": "recall_at_30", "value": 96.956}, {"type": "recall_at_50", 
"value": 97.34899999999999}, {"type": "recall_at_70", "value": 97.61}, {"type": "recall_at_100", "value": 97.83}, {"type": "recall_at_200", "value": 98.223}, {"type": "recall_at_300", "value": 98.374}, {"type": "recall_at_500", "value": 98.67899999999999}, {"type": "recall_at_700", "value": 98.787}, {"type": "recall_at_1000", "value": 98.919}, {"type": "precision_at_1", "value": 83.723}, {"type": "precision_at_2", "value": 48.425000000000004}, {"type": "precision_at_3", "value": 33.638}, {"type": "precision_at_5", "value": 20.843}, {"type": "precision_at_7", "value": 15.079}, {"type": "precision_at_10", "value": 10.674999999999999}, {"type": "precision_at_20", "value": 5.457999999999999}, {"type": "precision_at_30", "value": 3.6740000000000004}, {"type": "precision_at_50", "value": 2.2239999999999998}, {"type": "precision_at_70", "value": 1.599}, {"type": "precision_at_100", "value": 1.125}, {"type": "precision_at_200", "value": 0.5680000000000001}, {"type": "precision_at_300", "value": 0.38}, {"type": "precision_at_500", "value": 0.22999999999999998}, {"type": "precision_at_700", "value": 0.165}, {"type": "precision_at_1000", "value": 0.116}, {"type": "mrr_at_1", "value": 83.723}, {"type": "mrr_at_2", "value": 88.794}, {"type": "mrr_at_3", "value": 89.679}, {"type": "mrr_at_5", "value": 90.049}, {"type": "mrr_at_7", "value": 90.129}, {"type": "mrr_at_10", "value": 90.167}, {"type": "mrr_at_20", "value": 90.208}, {"type": "mrr_at_30", "value": 90.214}, {"type": "mrr_at_50", "value": 90.217}, {"type": "mrr_at_70", "value": 90.218}, {"type": "mrr_at_100", "value": 90.21900000000001}, {"type": "mrr_at_200", "value": 90.21900000000001}, {"type": "mrr_at_300", "value": 90.21900000000001}, {"type": "mrr_at_500", "value": 90.21900000000001}, {"type": "mrr_at_700", "value": 90.21900000000001}, {"type": "mrr_at_1000", "value": 90.21900000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 59.721999999999994}, {"type": "ndcg_at_2", "value": 56.85}, {"type": "ndcg_at_3", "value": 56.462999999999994}, {"type": "ndcg_at_5", "value": 57.75599999999999}, {"type": "ndcg_at_7", "value": 59.109}, {"type": "ndcg_at_10", "value": 60.402}, {"type": "ndcg_at_20", "value": 63.071999999999996}, {"type": "ndcg_at_30", "value": 64.302}, {"type": "ndcg_at_50", "value": 65.619}, {"type": "ndcg_at_70", "value": 66.161}, {"type": "ndcg_at_100", "value": 66.645}, {"type": "ndcg_at_200", "value": 67.353}, {"type": "ndcg_at_300", "value": 67.646}, {"type": "ndcg_at_500", "value": 67.852}, {"type": "ndcg_at_700", "value": 67.974}, {"type": "ndcg_at_1000", "value": 68.084}, {"type": "map_at_1", "value": 31.56}, {"type": "map_at_2", "value": 42.093}, {"type": "map_at_3", "value": 46.177}, {"type": "map_at_5", "value": 49.78}, {"type": "map_at_7", "value": 51.410999999999994}, {"type": "map_at_10", "value": 52.524}, {"type": "map_at_20", "value": 53.815000000000005}, {"type": "map_at_30", "value": 54.201}, {"type": "map_at_50", "value": 54.531}, {"type": "map_at_70", "value": 54.625}, {"type": "map_at_100", "value": 54.686}, {"type": "map_at_200", "value": 54.757999999999996}, {"type": "map_at_300", "value": 54.776}, {"type": "map_at_500", "value": 54.786}, {"type": "map_at_700", "value": 54.790000000000006}, {"type": "map_at_1000", "value": 54.793000000000006}, {"type": "recall_at_1", "value": 31.56}, {"type": "recall_at_2", "value": 44.858}, {"type": "recall_at_3", "value": 
51.11}, {"type": "recall_at_5", "value": 58.394}, {"type": "recall_at_7", "value": 63.001}, {"type": "recall_at_10", "value": 66.81200000000001}, {"type": "recall_at_20", "value": 74.901}, {"type": "recall_at_30", "value": 79.218}, {"type": "recall_at_50", "value": 84.49}, {"type": "recall_at_70", "value": 87.003}, {"type": "recall_at_100", "value": 89.345}, {"type": "recall_at_200", "value": 93.173}, {"type": "recall_at_300", "value": 94.906}, {"type": "recall_at_500", "value": 96.223}, {"type": "recall_at_700", "value": 97.043}, {"type": "recall_at_1000", "value": 97.785}, {"type": "precision_at_1", "value": 59.721999999999994}, {"type": "precision_at_2", "value": 46.682}, {"type": "precision_at_3", "value": 37.602999999999994}, {"type": "precision_at_5", "value": 27.500000000000004}, {"type": "precision_at_7", "value": 21.847}, {"type": "precision_at_10", "value": 16.667}, {"type": "precision_at_20", "value": 9.545}, {"type": "precision_at_30", "value": 6.795}, {"type": "precision_at_50", "value": 4.38}, {"type": "precision_at_70", "value": 3.221}, {"type": "precision_at_100", "value": 2.319}, {"type": "precision_at_200", "value": 1.2149999999999999}, {"type": "precision_at_300", "value": 0.827}, {"type": "precision_at_500", "value": 0.504}, {"type": "precision_at_700", "value": 0.364}, {"type": "precision_at_1000", "value": 0.257}, {"type": "mrr_at_1", "value": 59.721999999999994}, {"type": "mrr_at_2", "value": 64.506}, {"type": "mrr_at_3", "value": 65.792}, {"type": "mrr_at_5", "value": 66.965}, {"type": "mrr_at_7", "value": 67.34700000000001}, {"type": "mrr_at_10", "value": 67.57}, {"type": "mrr_at_20", "value": 67.896}, {"type": "mrr_at_30", "value": 68.008}, {"type": "mrr_at_50", "value": 68.083}, {"type": "mrr_at_70", "value": 68.105}, {"type": "mrr_at_100", "value": 68.116}, {"type": "mrr_at_200", "value": 68.12700000000001}, {"type": "mrr_at_300", "value": 68.13}, {"type": "mrr_at_500", "value": 68.132}, {"type": "mrr_at_700", "value": 68.133}, {"type": "mrr_at_1000", "value": 68.133}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 81.796}, {"type": "ndcg_at_2", "value": 67.999}, {"type": "ndcg_at_3", "value": 72.15599999999999}, {"type": "ndcg_at_5", "value": 74.99900000000001}, {"type": "ndcg_at_7", "value": 76.179}, {"type": "ndcg_at_10", "value": 77.022}, {"type": "ndcg_at_20", "value": 78.173}, {"type": "ndcg_at_30", "value": 78.648}, {"type": "ndcg_at_50", "value": 79.104}, {"type": "ndcg_at_70", "value": 79.335}, {"type": "ndcg_at_100", "value": 79.56}, {"type": "ndcg_at_200", "value": 79.911}, {"type": "ndcg_at_300", "value": 80.045}, {"type": "ndcg_at_500", "value": 80.19500000000001}, {"type": "ndcg_at_700", "value": 80.281}, {"type": "ndcg_at_1000", "value": 80.35}, {"type": "map_at_1", "value": 40.898}, {"type": "map_at_2", "value": 62.016000000000005}, {"type": "map_at_3", "value": 66.121}, {"type": "map_at_5", "value": 68.471}, {"type": "map_at_7", "value": 69.261}, {"type": "map_at_10", "value": 69.738}, {"type": "map_at_20", "value": 70.208}, {"type": "map_at_30", "value": 70.343}, {"type": "map_at_50", "value": 70.43700000000001}, {"type": "map_at_70", "value": 70.47099999999999}, {"type": "map_at_100", "value": 70.498}, {"type": "map_at_200", "value": 70.526}, {"type": "map_at_300", "value": 70.533}, {"type": "map_at_500", "value": 70.538}, {"type": "map_at_700", "value": 70.541}, {"type": "map_at_1000", "value": 
70.542}, {"type": "recall_at_1", "value": 40.898}, {"type": "recall_at_2", "value": 63.964}, {"type": "recall_at_3", "value": 70.743}, {"type": "recall_at_5", "value": 76.36699999999999}, {"type": "recall_at_7", "value": 79.142}, {"type": "recall_at_10", "value": 81.404}, {"type": "recall_at_20", "value": 85.111}, {"type": "recall_at_30", "value": 86.92800000000001}, {"type": "recall_at_50", "value": 88.899}, {"type": "recall_at_70", "value": 90.01400000000001}, {"type": "recall_at_100", "value": 91.19500000000001}, {"type": "recall_at_200", "value": 93.234}, {"type": "recall_at_300", "value": 94.105}, {"type": "recall_at_500", "value": 95.159}, {"type": "recall_at_700", "value": 95.8}, {"type": "recall_at_1000", "value": 96.34700000000001}, {"type": "precision_at_1", "value": 81.796}, {"type": "precision_at_2", "value": 63.964}, {"type": "precision_at_3", "value": 47.162}, {"type": "precision_at_5", "value": 30.547}, {"type": "precision_at_7", "value": 22.612}, {"type": "precision_at_10", "value": 16.281000000000002}, {"type": "precision_at_20", "value": 8.511000000000001}, {"type": "precision_at_30", "value": 5.795}, {"type": "precision_at_50", "value": 3.556}, {"type": "precision_at_70", "value": 2.572}, {"type": "precision_at_100", "value": 1.8239999999999998}, {"type": "precision_at_200", "value": 0.932}, {"type": "precision_at_300", "value": 0.627}, {"type": "precision_at_500", "value": 0.381}, {"type": "precision_at_700", "value": 0.27399999999999997}, {"type": "precision_at_1000", "value": 0.193}, {"type": "mrr_at_1", "value": 81.796}, {"type": "mrr_at_2", "value": 85.69200000000001}, {"type": "mrr_at_3", "value": 86.52}, {"type": "mrr_at_5", "value": 86.973}, {"type": "mrr_at_7", "value": 87.13300000000001}, {"type": "mrr_at_10", "value": 87.208}, {"type": "mrr_at_20", "value": 87.303}, {"type": "mrr_at_30", "value": 87.32799999999999}, {"type": "mrr_at_50", "value": 87.347}, {"type": "mrr_at_70", "value": 87.35199999999999}, {"type": "mrr_at_100", "value": 87.355}, {"type": "mrr_at_200", "value": 87.357}, {"type": "mrr_at_300", "value": 87.357}, {"type": "mrr_at_500", "value": 87.358}, {"type": "mrr_at_700", "value": 87.358}, {"type": "mrr_at_1000", "value": 87.358}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 94.79200000000002}, {"type": "ap", "value": 92.54484356773553}, {"type": "f1", "value": 94.78965313682525}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 24.398}, {"type": "ndcg_at_2", "value": 31.336000000000002}, {"type": "ndcg_at_3", "value": 35.266999999999996}, {"type": "ndcg_at_5", "value": 39.356}, {"type": "ndcg_at_7", "value": 41.562}, {"type": "ndcg_at_10", "value": 43.408}, {"type": "ndcg_at_20", "value": 46.107}, {"type": "ndcg_at_30", "value": 47.164}, {"type": "ndcg_at_50", "value": 48.126000000000005}, {"type": "ndcg_at_70", "value": 48.626999999999995}, {"type": "ndcg_at_100", "value": 49.043}, {"type": "ndcg_at_200", "value": 49.575}, {"type": "ndcg_at_300", "value": 49.794}, {"type": "ndcg_at_500", "value": 49.942}, {"type": "ndcg_at_700", "value": 50.014}, {"type": "ndcg_at_1000", "value": 50.077000000000005}, {"type": "map_at_1", "value": 23.723}, {"type": "map_at_2", "value": 29.593000000000004}, {"type": 
"map_at_3", "value": 32.273}, {"type": "map_at_5", "value": 34.587}, {"type": "map_at_7", "value": 35.589999999999996}, {"type": "map_at_10", "value": 36.296}, {"type": "map_at_20", "value": 37.059999999999995}, {"type": "map_at_30", "value": 37.265}, {"type": "map_at_50", "value": 37.402}, {"type": "map_at_70", "value": 37.454}, {"type": "map_at_100", "value": 37.486999999999995}, {"type": "map_at_200", "value": 37.516}, {"type": "map_at_300", "value": 37.524}, {"type": "map_at_500", "value": 37.528}, {"type": "map_at_700", "value": 37.529}, {"type": "map_at_1000", "value": 37.53}, {"type": "recall_at_1", "value": 23.723}, {"type": "recall_at_2", "value": 35.355}, {"type": "recall_at_3", "value": 43.22}, {"type": "recall_at_5", "value": 53.025}, {"type": "recall_at_7", "value": 59.327}, {"type": "recall_at_10", "value": 65.302}, {"type": "recall_at_20", "value": 75.765}, {"type": "recall_at_30", "value": 80.632}, {"type": "recall_at_50", "value": 85.63499999999999}, {"type": "recall_at_70", "value": 88.554}, {"type": "recall_at_100", "value": 91.16300000000001}, {"type": "recall_at_200", "value": 94.85}, {"type": "recall_at_300", "value": 96.532}, {"type": "recall_at_500", "value": 97.751}, {"type": "recall_at_700", "value": 98.383}, {"type": "recall_at_1000", "value": 98.97}, {"type": "precision_at_1", "value": 24.398}, {"type": "precision_at_2", "value": 18.274}, {"type": "precision_at_3", "value": 14.951999999999998}, {"type": "precision_at_5", "value": 11.052}, {"type": "precision_at_7", "value": 8.84}, {"type": "precision_at_10", "value": 6.8309999999999995}, {"type": "precision_at_20", "value": 3.978}, {"type": "precision_at_30", "value": 2.827}, {"type": "precision_at_50", "value": 1.807}, {"type": "precision_at_70", "value": 1.336}, {"type": "precision_at_100", "value": 0.964}, {"type": "precision_at_200", "value": 0.502}, {"type": "precision_at_300", "value": 0.34099999999999997}, {"type": "precision_at_500", "value": 0.208}, {"type": "precision_at_700", "value": 0.15}, {"type": "precision_at_1000", "value": 0.105}, {"type": "mrr_at_1", "value": 24.398}, {"type": "mrr_at_2", "value": 30.351}, {"type": "mrr_at_3", "value": 33.001000000000005}, {"type": "mrr_at_5", "value": 35.228}, {"type": "mrr_at_7", "value": 36.223}, {"type": "mrr_at_10", "value": 36.903999999999996}, {"type": "mrr_at_20", "value": 37.631}, {"type": "mrr_at_30", "value": 37.830000000000005}, {"type": "mrr_at_50", "value": 37.955}, {"type": "mrr_at_70", "value": 38.003}, {"type": "mrr_at_100", "value": 38.033}, {"type": "mrr_at_200", "value": 38.059}, {"type": "mrr_at_300", "value": 38.066}, {"type": "mrr_at_500", "value": 38.068999999999996}, {"type": "mrr_at_700", "value": 38.07}, {"type": "mrr_at_1000", "value": 38.07}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 96.35658914728683}, {"type": "f1", "value": 96.15039630903114}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 86.29730962152303}, {"type": "f1", "value": 71.12166316567485}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": 
"test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 79.98991257565568}, {"type": "f1", "value": 77.41680115095276}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 82.1990585070612}, {"type": "f1", "value": 82.23719179179362}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 40.03019554933584}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 38.999760551497815}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 32.72383151953079}, {"type": "mrr", "value": 33.93989699030721}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 51.858000000000004}, {"type": "ndcg_at_2", "value": 49.675999999999995}, {"type": "ndcg_at_3", "value": 47.519}, {"type": "ndcg_at_5", "value": 45.198}, {"type": "ndcg_at_7", "value": 43.504}, {"type": "ndcg_at_10", "value": 41.88}, {"type": "ndcg_at_20", "value": 39.122}, {"type": "ndcg_at_30", "value": 37.95}, {"type": "ndcg_at_50", "value": 37.602999999999994}, {"type": "ndcg_at_70", "value": 37.836}, {"type": "ndcg_at_100", "value": 38.493}, {"type": "ndcg_at_200", "value": 40.187}, {"type": "ndcg_at_300", "value": 41.524}, {"type": "ndcg_at_500", "value": 43.657000000000004}, {"type": "ndcg_at_700", "value": 45.234}, {"type": "ndcg_at_1000", "value": 47.047}, {"type": "map_at_1", "value": 6.392}, {"type": "map_at_2", "value": 10.113}, {"type": "map_at_3", "value": 11.543000000000001}, {"type": "map_at_5", "value": 13.729}, {"type": "map_at_7", "value": 14.985000000000001}, {"type": "map_at_10", "value": 16.217000000000002}, {"type": "map_at_20", "value": 18.106}, {"type": "map_at_30", "value": 18.878}, {"type": "map_at_50", "value": 19.822}, {"type": "map_at_70", "value": 20.352999999999998}, {"type": "map_at_100", "value": 20.827}, {"type": "map_at_200", "value": 21.512}, {"type": "map_at_300", "value": 21.826}, {"type": "map_at_500", "value": 22.155}, {"type": "map_at_700", "value": 22.349}, {"type": "map_at_1000", "value": 22.531000000000002}, {"type": "recall_at_1", "value": 6.392}, {"type": "recall_at_2", "value": 11.215}, {"type": "recall_at_3", "value": 13.231000000000002}, {"type": "recall_at_5", "value": 16.66}, {"type": "recall_at_7", "value": 18.802}, {"type": "recall_at_10", "value": 21.185000000000002}, {"type": "recall_at_20", "value": 25.35}, {"type": "recall_at_30", "value": 27.91}, {"type": "recall_at_50", "value": 32.845}, {"type": "recall_at_70", "value": 35.789}, {"type": "recall_at_100", "value": 39.247}, {"type": "recall_at_200", "value": 46.655}, {"type": "recall_at_300", "value": 51.43299999999999}, {"type": 
"recall_at_500", "value": 59.472}, {"type": "recall_at_700", "value": 64.742}, {"type": "recall_at_1000", "value": 70.97099999999999}, {"type": "precision_at_1", "value": 53.559999999999995}, {"type": "precision_at_2", "value": 48.762}, {"type": "precision_at_3", "value": 44.169000000000004}, {"type": "precision_at_5", "value": 39.071}, {"type": "precision_at_7", "value": 35.161}, {"type": "precision_at_10", "value": 31.238}, {"type": "precision_at_20", "value": 23.064999999999998}, {"type": "precision_at_30", "value": 18.844}, {"type": "precision_at_50", "value": 14.601}, {"type": "precision_at_70", "value": 12.088000000000001}, {"type": "precision_at_100", "value": 9.844999999999999}, {"type": "precision_at_200", "value": 6.358}, {"type": "precision_at_300", "value": 4.915}, {"type": "precision_at_500", "value": 3.531}, {"type": "precision_at_700", "value": 2.8649999999999998}, {"type": "precision_at_1000", "value": 2.289}, {"type": "mrr_at_1", "value": 54.17999999999999}, {"type": "mrr_at_2", "value": 59.288}, {"type": "mrr_at_3", "value": 60.836}, {"type": "mrr_at_5", "value": 62.275999999999996}, {"type": "mrr_at_7", "value": 62.688}, {"type": "mrr_at_10", "value": 62.865}, {"type": "mrr_at_20", "value": 63.11}, {"type": "mrr_at_30", "value": 63.193999999999996}, {"type": "mrr_at_50", "value": 63.258}, {"type": "mrr_at_70", "value": 63.278}, {"type": "mrr_at_100", "value": 63.297000000000004}, {"type": "mrr_at_200", "value": 63.315999999999995}, {"type": "mrr_at_300", "value": 63.318}, {"type": "mrr_at_500", "value": 63.32299999999999}, {"type": "mrr_at_700", "value": 63.324000000000005}, {"type": "mrr_at_1000", "value": 63.324999999999996}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 50.897999999999996}, {"type": "ndcg_at_2", "value": 59.126}, {"type": "ndcg_at_3", "value": 63.093999999999994}, {"type": "ndcg_at_5", "value": 67.197}, {"type": "ndcg_at_7", "value": 68.719}, {"type": "ndcg_at_10", "value": 69.915}, {"type": "ndcg_at_20", "value": 71.229}, {"type": "ndcg_at_30", "value": 71.667}, {"type": "ndcg_at_50", "value": 71.98}, {"type": "ndcg_at_70", "value": 72.127}, {"type": "ndcg_at_100", "value": 72.217}, {"type": "ndcg_at_200", "value": 72.319}, {"type": "ndcg_at_300", "value": 72.347}, {"type": "ndcg_at_500", "value": 72.37}, {"type": "ndcg_at_700", "value": 72.379}, {"type": "ndcg_at_1000", "value": 72.381}, {"type": "map_at_1", "value": 45.297}, {"type": "map_at_2", "value": 55.596000000000004}, {"type": "map_at_3", "value": 58.724}, {"type": "map_at_5", "value": 61.387}, {"type": "map_at_7", "value": 62.173}, {"type": "map_at_10", "value": 62.69}, {"type": "map_at_20", "value": 63.125}, {"type": "map_at_30", "value": 63.223}, {"type": "map_at_50", "value": 63.27700000000001}, {"type": "map_at_70", "value": 63.295}, {"type": "map_at_100", "value": 63.303}, {"type": "map_at_200", "value": 63.31}, {"type": "map_at_300", "value": 63.31099999999999}, {"type": "map_at_500", "value": 63.312000000000005}, {"type": "map_at_700", "value": 63.312000000000005}, {"type": "map_at_1000", "value": 63.312000000000005}, {"type": "recall_at_1", "value": 45.297}, {"type": "recall_at_2", "value": 63.866}, {"type": "recall_at_3", "value": 71.898}, {"type": "recall_at_5", "value": 81.16600000000001}, {"type": "recall_at_7", "value": 85.301}, {"type": "recall_at_10", "value": 88.94800000000001}, {"type": "recall_at_20", "value": 93.719}, {"type": 
"recall_at_30", "value": 95.628}, {"type": "recall_at_50", "value": 97.14699999999999}, {"type": "recall_at_70", "value": 97.955}, {"type": "recall_at_100", "value": 98.48599999999999}, {"type": "recall_at_200", "value": 99.157}, {"type": "recall_at_300", "value": 99.355}, {"type": "recall_at_500", "value": 99.53699999999999}, {"type": "recall_at_700", "value": 99.62299999999999}, {"type": "recall_at_1000", "value": 99.638}, {"type": "precision_at_1", "value": 50.897999999999996}, {"type": "precision_at_2", "value": 36.703}, {"type": "precision_at_3", "value": 27.926000000000002}, {"type": "precision_at_5", "value": 19.276}, {"type": "precision_at_7", "value": 14.533999999999999}, {"type": "precision_at_10", "value": 10.678}, {"type": "precision_at_20", "value": 5.663}, {"type": "precision_at_30", "value": 3.8600000000000003}, {"type": "precision_at_50", "value": 2.358}, {"type": "precision_at_70", "value": 1.7000000000000002}, {"type": "precision_at_100", "value": 1.198}, {"type": "precision_at_200", "value": 0.603}, {"type": "precision_at_300", "value": 0.40299999999999997}, {"type": "precision_at_500", "value": 0.242}, {"type": "precision_at_700", "value": 0.173}, {"type": "precision_at_1000", "value": 0.121}, {"type": "mrr_at_1", "value": 50.897999999999996}, {"type": "mrr_at_2", "value": 59.994}, {"type": "mrr_at_3", "value": 62.553000000000004}, {"type": "mrr_at_5", "value": 64.307}, {"type": "mrr_at_7", "value": 64.864}, {"type": "mrr_at_10", "value": 65.22200000000001}, {"type": "mrr_at_20", "value": 65.499}, {"type": "mrr_at_30", "value": 65.561}, {"type": "mrr_at_50", "value": 65.592}, {"type": "mrr_at_70", "value": 65.602}, {"type": "mrr_at_100", "value": 65.607}, {"type": "mrr_at_200", "value": 65.61099999999999}, {"type": "mrr_at_300", "value": 65.61200000000001}, {"type": "mrr_at_500", "value": 65.61200000000001}, {"type": "mrr_at_700", "value": 65.61200000000001}, {"type": "mrr_at_1000", "value": 65.61200000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 82.96}, {"type": "ndcg_at_2", "value": 85.614}, {"type": "ndcg_at_3", "value": 87.19}, {"type": "ndcg_at_5", "value": 88.654}, {"type": "ndcg_at_7", "value": 89.287}, {"type": "ndcg_at_10", "value": 89.785}, {"type": "ndcg_at_20", "value": 90.384}, {"type": "ndcg_at_30", "value": 90.589}, {"type": "ndcg_at_50", "value": 90.738}, {"type": "ndcg_at_70", "value": 90.789}, {"type": "ndcg_at_100", "value": 90.824}, {"type": "ndcg_at_200", "value": 90.869}, {"type": "ndcg_at_300", "value": 90.881}, {"type": "ndcg_at_500", "value": 90.886}, {"type": "ndcg_at_700", "value": 90.889}, {"type": "ndcg_at_1000", "value": 90.889}, {"type": "map_at_1", "value": 72.152}, {"type": "map_at_2", "value": 80.818}, {"type": "map_at_3", "value": 83.462}, {"type": "map_at_5", "value": 85.286}, {"type": "map_at_7", "value": 85.921}, {"type": "map_at_10", "value": 86.334}, {"type": "map_at_20", "value": 86.737}, {"type": "map_at_30", "value": 86.847}, {"type": "map_at_50", "value": 86.911}, {"type": "map_at_70", "value": 86.932}, {"type": "map_at_100", "value": 86.943}, {"type": "map_at_200", "value": 86.953}, {"type": "map_at_300", "value": 86.955}, {"type": "map_at_500", "value": 86.956}, {"type": "map_at_700", "value": 86.956}, {"type": "map_at_1000", "value": 86.956}, {"type": "recall_at_1", "value": 72.152}, {"type": "recall_at_2", "value": 84.129}, {"type": "recall_at_3", "value": 88.87}, 
{"type": "recall_at_5", "value": 93.067}, {"type": "recall_at_7", "value": 94.882}, {"type": "recall_at_10", "value": 96.353}, {"type": "recall_at_20", "value": 98.26700000000001}, {"type": "recall_at_30", "value": 98.92999999999999}, {"type": "recall_at_50", "value": 99.441}, {"type": "recall_at_70", "value": 99.619}, {"type": "recall_at_100", "value": 99.748}, {"type": "recall_at_200", "value": 99.911}, {"type": "recall_at_300", "value": 99.956}, {"type": "recall_at_500", "value": 99.98}, {"type": "recall_at_700", "value": 99.991}, {"type": "recall_at_1000", "value": 99.996}, {"type": "precision_at_1", "value": 82.96}, {"type": "precision_at_2", "value": 52.175000000000004}, {"type": "precision_at_3", "value": 38.223}, {"type": "precision_at_5", "value": 25.056}, {"type": "precision_at_7", "value": 18.717}, {"type": "precision_at_10", "value": 13.614999999999998}, {"type": "precision_at_20", "value": 7.208}, {"type": "precision_at_30", "value": 4.928}, {"type": "precision_at_50", "value": 3.024}, {"type": "precision_at_70", "value": 2.183}, {"type": "precision_at_100", "value": 1.54}, {"type": "precision_at_200", "value": 0.779}, {"type": "precision_at_300", "value": 0.521}, {"type": "precision_at_500", "value": 0.313}, {"type": "precision_at_700", "value": 0.22399999999999998}, {"type": "precision_at_1000", "value": 0.157}, {"type": "mrr_at_1", "value": 82.96}, {"type": "mrr_at_2", "value": 87.005}, {"type": "mrr_at_3", "value": 88.07199999999999}, {"type": "mrr_at_5", "value": 88.634}, {"type": "mrr_at_7", "value": 88.793}, {"type": "mrr_at_10", "value": 88.87899999999999}, {"type": "mrr_at_20", "value": 88.94999999999999}, {"type": "mrr_at_30", "value": 88.96}, {"type": "mrr_at_50", "value": 88.965}, {"type": "mrr_at_70", "value": 88.966}, {"type": "mrr_at_100", "value": 88.967}, {"type": "mrr_at_200", "value": 88.967}, {"type": "mrr_at_300", "value": 88.967}, {"type": "mrr_at_500", "value": 88.967}, {"type": "mrr_at_700", "value": 88.967}, {"type": "mrr_at_1000", "value": 88.967}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 59.90388554491155}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 67.64232539036783}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 22.6}, {"type": "ndcg_at_2", "value": 20.355999999999998}, {"type": "ndcg_at_3", "value": 18.536}, {"type": "ndcg_at_5", "value": 16.523}, {"type": "ndcg_at_7", "value": 17.979}, {"type": "ndcg_at_10", "value": 19.908}, {"type": "ndcg_at_20", "value": 22.887}, {"type": "ndcg_at_30", "value": 24.43}, {"type": "ndcg_at_50", "value": 25.959}, {"type": "ndcg_at_70", "value": 26.989}, {"type": "ndcg_at_100", "value": 27.977}, {"type": "ndcg_at_200", "value": 29.831000000000003}, {"type": "ndcg_at_300", "value": 30.787}, {"type": "ndcg_at_500", "value": 31.974999999999998}, {"type": "ndcg_at_700", "value": 32.554}, {"type": "ndcg_at_1000", "value": 33.277}, {"type": "map_at_1", "value": 4.593}, {"type": "map_at_2", "value": 6.923}, {"type": "map_at_3", "value": 8.3}, {"type": 
"map_at_5", "value": 10.072000000000001}, {"type": "map_at_7", "value": 10.782}, {"type": "map_at_10", "value": 11.72}, {"type": "map_at_20", "value": 12.838}, {"type": "map_at_30", "value": 13.257}, {"type": "map_at_50", "value": 13.569}, {"type": "map_at_70", "value": 13.733}, {"type": "map_at_100", "value": 13.858999999999998}, {"type": "map_at_200", "value": 14.018}, {"type": "map_at_300", "value": 14.072999999999999}, {"type": "map_at_500", "value": 14.126}, {"type": "map_at_700", "value": 14.145}, {"type": "map_at_1000", "value": 14.161999999999999}, {"type": "recall_at_1", "value": 4.593}, {"type": "recall_at_2", "value": 7.997999999999999}, {"type": "recall_at_3", "value": 10.563}, {"type": "recall_at_5", "value": 14.907}, {"type": "recall_at_7", "value": 17.4}, {"type": "recall_at_10", "value": 21.18}, {"type": "recall_at_20", "value": 28.144999999999996}, {"type": "recall_at_30", "value": 32.462}, {"type": "recall_at_50", "value": 37.267}, {"type": "recall_at_70", "value": 40.875}, {"type": "recall_at_100", "value": 44.641999999999996}, {"type": "recall_at_200", "value": 52.573}, {"type": "recall_at_300", "value": 57.089999999999996}, {"type": "recall_at_500", "value": 63.14300000000001}, {"type": "recall_at_700", "value": 66.313}, {"type": "recall_at_1000", "value": 70.458}, {"type": "precision_at_1", "value": 22.6}, {"type": "precision_at_2", "value": 19.7}, {"type": "precision_at_3", "value": 17.333000000000002}, {"type": "precision_at_5", "value": 14.680000000000001}, {"type": "precision_at_7", "value": 12.243}, {"type": "precision_at_10", "value": 10.440000000000001}, {"type": "precision_at_20", "value": 6.944999999999999}, {"type": "precision_at_30", "value": 5.333}, {"type": "precision_at_50", "value": 3.678}, {"type": "precision_at_70", "value": 2.881}, {"type": "precision_at_100", "value": 2.2030000000000003}, {"type": "precision_at_200", "value": 1.295}, {"type": "precision_at_300", "value": 0.9369999999999999}, {"type": "precision_at_500", "value": 0.622}, {"type": "precision_at_700", "value": 0.466}, {"type": "precision_at_1000", "value": 0.347}, {"type": "mrr_at_1", "value": 22.6}, {"type": "mrr_at_2", "value": 27.900000000000002}, {"type": "mrr_at_3", "value": 30.067}, {"type": "mrr_at_5", "value": 32.207}, {"type": "mrr_at_7", "value": 33.004}, {"type": "mrr_at_10", "value": 33.596}, {"type": "mrr_at_20", "value": 34.268}, {"type": "mrr_at_30", "value": 34.492}, {"type": "mrr_at_50", "value": 34.628}, {"type": "mrr_at_70", "value": 34.681}, {"type": "mrr_at_100", "value": 34.717}, {"type": "mrr_at_200", "value": 34.757}, {"type": "mrr_at_300", "value": 34.768}, {"type": "mrr_at_500", "value": 34.772}, {"type": "mrr_at_700", "value": 34.774}, {"type": "mrr_at_1000", "value": 34.775}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.90122745229677}, {"type": "cos_sim_spearman", "value": 82.92294737327579}, {"type": "euclidean_pearson", "value": 84.08979655773187}, {"type": "euclidean_spearman", "value": 82.92294657285412}, {"type": "manhattan_pearson", "value": 84.09347480531832}, {"type": "manhattan_spearman", "value": 82.91564613948087}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 
87.01218713698583}, {"type": "cos_sim_spearman", "value": 79.46865215168464}, {"type": "euclidean_pearson", "value": 83.22621889891909}, {"type": "euclidean_spearman", "value": 79.46853821709514}, {"type": "manhattan_pearson", "value": 83.69962580788805}, {"type": "manhattan_spearman", "value": 79.9561593356932}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.98438696342964}, {"type": "cos_sim_spearman", "value": 89.15419511870839}, {"type": "euclidean_pearson", "value": 88.49646141802894}, {"type": "euclidean_spearman", "value": 89.15419503946019}, {"type": "manhattan_pearson", "value": 88.6420585616327}, {"type": "manhattan_spearman", "value": 89.42648950757743}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.30772547759544}, {"type": "cos_sim_spearman", "value": 84.93199878424691}, {"type": "euclidean_pearson", "value": 86.16266630395455}, {"type": "euclidean_spearman", "value": 84.93198798543634}, {"type": "manhattan_pearson", "value": 86.14285723189803}, {"type": "manhattan_spearman", "value": 85.0361672522687}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 90.21342071197127}, {"type": "cos_sim_spearman", "value": 90.7407512744838}, {"type": "euclidean_pearson", "value": 90.1517933113061}, {"type": "euclidean_spearman", "value": 90.74075125431919}, {"type": "manhattan_pearson", "value": 90.17963034676193}, {"type": "manhattan_spearman", "value": 90.88999275865135}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.82518054100498}, {"type": "cos_sim_spearman", "value": 87.81570533154735}, {"type": "euclidean_pearson", "value": 86.91684561573618}, {"type": "euclidean_spearman", "value": 87.81570533154735}, {"type": "manhattan_pearson", "value": 86.98311935744032}, {"type": "manhattan_spearman", "value": 87.9594667151966}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 92.09578436612053}, {"type": "cos_sim_spearman", "value": 92.01519349090438}, {"type": "euclidean_pearson", "value": 92.07113635890894}, {"type": "euclidean_spearman", "value": 92.01519349090438}, {"type": "manhattan_pearson", "value": 91.89343820765625}, {"type": "manhattan_spearman", "value": 91.7443476810177}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 69.29997751464549}, {"type": "cos_sim_spearman", "value": 68.36425436812782}, {"type": "euclidean_pearson", "value": 69.81381677661783}, {"type": "euclidean_spearman", "value": 68.36425436812782}, {"type": "manhattan_pearson", "value": 
69.92823397008026}, {"type": "manhattan_spearman", "value": 68.35770640039254}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.39126315452359}, {"type": "cos_sim_spearman", "value": 88.99708463265337}, {"type": "euclidean_pearson", "value": 88.60793820038607}, {"type": "euclidean_spearman", "value": 88.99708463265337}, {"type": "manhattan_pearson", "value": 88.69860633571047}, {"type": "manhattan_spearman", "value": 89.20094593888012}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 86.58028062818582}, {"type": "mrr", "value": 96.53586790841693}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 66.333}, {"type": "ndcg_at_2", "value": 70.655}, {"type": "ndcg_at_3", "value": 72.801}, {"type": "ndcg_at_5", "value": 75.793}, {"type": "ndcg_at_7", "value": 76.946}, {"type": "ndcg_at_10", "value": 77.66199999999999}, {"type": "ndcg_at_20", "value": 78.786}, {"type": "ndcg_at_30", "value": 79.066}, {"type": "ndcg_at_50", "value": 79.255}, {"type": "ndcg_at_70", "value": 79.423}, {"type": "ndcg_at_100", "value": 79.476}, {"type": "ndcg_at_200", "value": 79.65299999999999}, {"type": "ndcg_at_300", "value": 79.696}, {"type": "ndcg_at_500", "value": 79.73599999999999}, {"type": "ndcg_at_700", "value": 79.77199999999999}, {"type": "ndcg_at_1000", "value": 79.77199999999999}, {"type": "map_at_1", "value": 63.383}, {"type": "map_at_2", "value": 68.144}, {"type": "map_at_3", "value": 70.19800000000001}, {"type": "map_at_5", "value": 72.38}, {"type": "map_at_7", "value": 72.955}, {"type": "map_at_10", "value": 73.312}, {"type": "map_at_20", "value": 73.678}, {"type": "map_at_30", "value": 73.72800000000001}, {"type": "map_at_50", "value": 73.75500000000001}, {"type": "map_at_70", "value": 73.771}, {"type": "map_at_100", "value": 73.776}, {"type": "map_at_200", "value": 73.783}, {"type": "map_at_300", "value": 73.784}, {"type": "map_at_500", "value": 73.785}, {"type": "map_at_700", "value": 73.786}, {"type": "map_at_1000", "value": 73.786}, {"type": "recall_at_1", "value": 63.383}, {"type": "recall_at_2", "value": 72.283}, {"type": "recall_at_3", "value": 77.183}, {"type": "recall_at_5", "value": 84.56099999999999}, {"type": "recall_at_7", "value": 87.67200000000001}, {"type": "recall_at_10", "value": 89.822}, {"type": "recall_at_20", "value": 94}, {"type": "recall_at_30", "value": 95.333}, {"type": "recall_at_50", "value": 96.333}, {"type": "recall_at_70", "value": 97.333}, {"type": "recall_at_100", "value": 97.667}, {"type": "recall_at_200", "value": 99}, {"type": "recall_at_300", "value": 99.333}, {"type": "recall_at_500", "value": 99.667}, {"type": "recall_at_700", "value": 100}, {"type": "recall_at_1000", "value": 100}, {"type": "precision_at_1", "value": 66.333}, {"type": "precision_at_2", "value": 38.667}, {"type": "precision_at_3", "value": 28.111000000000004}, {"type": "precision_at_5", "value": 18.933}, {"type": "precision_at_7", "value": 14.094999999999999}, {"type": "precision_at_10", "value": 10.167}, {"type": "precision_at_20", "value": 5.35}, {"type": 
"precision_at_30", "value": 3.611}, {"type": "precision_at_50", "value": 2.1870000000000003}, {"type": "precision_at_70", "value": 1.576}, {"type": "precision_at_100", "value": 1.107}, {"type": "precision_at_200", "value": 0.5599999999999999}, {"type": "precision_at_300", "value": 0.374}, {"type": "precision_at_500", "value": 0.22499999999999998}, {"type": "precision_at_700", "value": 0.161}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "mrr_at_1", "value": 66.333}, {"type": "mrr_at_2", "value": 70.833}, {"type": "mrr_at_3", "value": 72.167}, {"type": "mrr_at_5", "value": 73.6}, {"type": "mrr_at_7", "value": 74.084}, {"type": "mrr_at_10", "value": 74.283}, {"type": "mrr_at_20", "value": 74.54499999999999}, {"type": "mrr_at_30", "value": 74.59599999999999}, {"type": "mrr_at_50", "value": 74.622}, {"type": "mrr_at_70", "value": 74.639}, {"type": "mrr_at_100", "value": 74.643}, {"type": "mrr_at_200", "value": 74.65}, {"type": "mrr_at_300", "value": 74.652}, {"type": "mrr_at_500", "value": 74.653}, {"type": "mrr_at_700", "value": 74.653}, {"type": "mrr_at_1000", "value": 74.653}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.84554455445544}, {"type": "cos_sim_ap", "value": 96.31178339136798}, {"type": "cos_sim_f1", "value": 92.1921921921922}, {"type": "cos_sim_precision", "value": 92.28456913827655}, {"type": "cos_sim_recall", "value": 92.10000000000001}, {"type": "dot_accuracy", "value": 99.84554455445544}, {"type": "dot_ap", "value": 96.31178339136797}, {"type": "dot_f1", "value": 92.1921921921922}, {"type": "dot_precision", "value": 92.28456913827655}, {"type": "dot_recall", "value": 92.10000000000001}, {"type": "euclidean_accuracy", "value": 99.84554455445544}, {"type": "euclidean_ap", "value": 96.31178339136798}, {"type": "euclidean_f1", "value": 92.1921921921922}, {"type": "euclidean_precision", "value": 92.28456913827655}, {"type": "euclidean_recall", "value": 92.10000000000001}, {"type": "manhattan_accuracy", "value": 99.84752475247525}, {"type": "manhattan_ap", "value": 96.4591954606088}, {"type": "manhattan_f1", "value": 92.25352112676056}, {"type": "manhattan_precision", "value": 92.81376518218623}, {"type": "manhattan_recall", "value": 91.7}, {"type": "max_accuracy", "value": 99.84752475247525}, {"type": "max_ap", "value": 96.4591954606088}, {"type": "max_f1", "value": 92.25352112676056}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 74.24659759283294}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 46.77690051260451}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 55.68436757803185}, {"type": "mrr", "value": 56.82157711569475}]}, {"task": 
{"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 31.652482405629843}, {"type": "cos_sim_spearman", "value": 31.16341822347735}, {"type": "dot_pearson", "value": 31.652479892699837}, {"type": "dot_spearman", "value": 31.16341822347735}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 92}, {"type": "ndcg_at_2", "value": 90.839}, {"type": "ndcg_at_3", "value": 90.642}, {"type": "ndcg_at_5", "value": 90.348}, {"type": "ndcg_at_7", "value": 89.015}, {"type": "ndcg_at_10", "value": 87.599}, {"type": "ndcg_at_20", "value": 84.434}, {"type": "ndcg_at_30", "value": 81.655}, {"type": "ndcg_at_50", "value": 77.278}, {"type": "ndcg_at_70", "value": 73.957}, {"type": "ndcg_at_100", "value": 69.56}, {"type": "ndcg_at_200", "value": 60.724000000000004}, {"type": "ndcg_at_300", "value": 57.245000000000005}, {"type": "ndcg_at_500", "value": 56.316}, {"type": "ndcg_at_700", "value": 58.399}, {"type": "ndcg_at_1000", "value": 62.21600000000001}, {"type": "map_at_1", "value": 0.247}, {"type": "map_at_2", "value": 0.488}, {"type": "map_at_3", "value": 0.7230000000000001}, {"type": "map_at_5", "value": 1.204}, {"type": "map_at_7", "value": 1.6500000000000001}, {"type": "map_at_10", "value": 2.292}, {"type": "map_at_20", "value": 4.274}, {"type": "map_at_30", "value": 6.027}, {"type": "map_at_50", "value": 9.083}, {"type": "map_at_70", "value": 11.751000000000001}, {"type": "map_at_100", "value": 14.912}, {"type": "map_at_200", "value": 22.213}, {"type": "map_at_300", "value": 26.667999999999996}, {"type": "map_at_500", "value": 31.556}, {"type": "map_at_700", "value": 34.221000000000004}, {"type": "map_at_1000", "value": 36.443999999999996}, {"type": "recall_at_1", "value": 0.247}, {"type": "recall_at_2", "value": 0.49899999999999994}, {"type": "recall_at_3", "value": 0.742}, {"type": "recall_at_5", "value": 1.247}, {"type": "recall_at_7", "value": 1.722}, {"type": "recall_at_10", "value": 2.405}, {"type": "recall_at_20", "value": 4.583}, {"type": "recall_at_30", "value": 6.587999999999999}, {"type": "recall_at_50", "value": 10.188}, {"type": "recall_at_70", "value": 13.496}, {"type": "recall_at_100", "value": 17.578}, {"type": "recall_at_200", "value": 28.158}, {"type": "recall_at_300", "value": 35.532000000000004}, {"type": "recall_at_500", "value": 45.31}, {"type": "recall_at_700", "value": 51.822}, {"type": "recall_at_1000", "value": 58.53}, {"type": "precision_at_1", "value": 96}, {"type": "precision_at_2", "value": 96}, {"type": "precision_at_3", "value": 95.333}, {"type": "precision_at_5", "value": 94.8}, {"type": "precision_at_7", "value": 93.429}, {"type": "precision_at_10", "value": 91.4}, {"type": "precision_at_20", "value": 87.7}, {"type": "precision_at_30", "value": 84.867}, {"type": "precision_at_50", "value": 80.24}, {"type": "precision_at_70", "value": 76.371}, {"type": "precision_at_100", "value": 71.08}, {"type": "precision_at_200", "value": 59.4}, {"type": "precision_at_300", "value": 51.459999999999994}, {"type": "precision_at_500", "value": 40.644000000000005}, {"type": "precision_at_700", "value": 33.889}, {"type": "precision_at_1000", "value": 27.250000000000004}, {"type": "mrr_at_1", "value": 96}, {"type": "mrr_at_2", "value": 98}, {"type": "mrr_at_3", "value": 
98}, {"type": "mrr_at_5", "value": 98}, {"type": "mrr_at_7", "value": 98}, {"type": "mrr_at_10", "value": 98}, {"type": "mrr_at_20", "value": 98}, {"type": "mrr_at_30", "value": 98}, {"type": "mrr_at_50", "value": 98}, {"type": "mrr_at_70", "value": 98}, {"type": "mrr_at_100", "value": 98}, {"type": "mrr_at_200", "value": 98}, {"type": "mrr_at_300", "value": 98}, {"type": "mrr_at_500", "value": 98}, {"type": "mrr_at_700", "value": 98}, {"type": "mrr_at_1000", "value": 98}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "ndcg_at_1", "value": 43.878}, {"type": "ndcg_at_2", "value": 37.956}, {"type": "ndcg_at_3", "value": 35.053}, {"type": "ndcg_at_5", "value": 32.59}, {"type": "ndcg_at_7", "value": 30.226}, {"type": "ndcg_at_10", "value": 29.005}, {"type": "ndcg_at_20", "value": 30.11}, {"type": "ndcg_at_30", "value": 32.019999999999996}, {"type": "ndcg_at_50", "value": 34.354}, {"type": "ndcg_at_70", "value": 36.665}, {"type": "ndcg_at_100", "value": 38.888}, {"type": "ndcg_at_200", "value": 43.435}, {"type": "ndcg_at_300", "value": 45.795}, {"type": "ndcg_at_500", "value": 48.699999999999996}, {"type": "ndcg_at_700", "value": 50.242}, {"type": "ndcg_at_1000", "value": 51.529}, {"type": "map_at_1", "value": 3.521}, {"type": "map_at_2", "value": 5.309}, {"type": "map_at_3", "value": 6.576}, {"type": "map_at_5", "value": 8.97}, {"type": "map_at_7", "value": 10.194}, {"type": "map_at_10", "value": 11.949}, {"type": "map_at_20", "value": 14.686}, {"type": "map_at_30", "value": 15.8}, {"type": "map_at_50", "value": 16.59}, {"type": "map_at_70", "value": 17.2}, {"type": "map_at_100", "value": 17.765}, {"type": "map_at_200", "value": 18.636}, {"type": "map_at_300", "value": 18.972}, {"type": "map_at_500", "value": 19.301}, {"type": "map_at_700", "value": 19.445}, {"type": "map_at_1000", "value": 19.546}, {"type": "recall_at_1", "value": 3.521}, {"type": "recall_at_2", "value": 5.848}, {"type": "recall_at_3", "value": 7.657}, {"type": "recall_at_5", "value": 11.368}, {"type": "recall_at_7", "value": 13.748}, {"type": "recall_at_10", "value": 18.061}, {"type": "recall_at_20", "value": 26.844}, {"type": "recall_at_30", "value": 31.186000000000003}, {"type": "recall_at_50", "value": 35.951}, {"type": "recall_at_70", "value": 40.961999999999996}, {"type": "recall_at_100", "value": 46.743}, {"type": "recall_at_200", "value": 58.483}, {"type": "recall_at_300", "value": 65.973}, {"type": "recall_at_500", "value": 75.233}, {"type": "recall_at_700", "value": 80.472}, {"type": "recall_at_1000", "value": 85.02}, {"type": "precision_at_1", "value": 46.939}, {"type": "precision_at_2", "value": 38.775999999999996}, {"type": "precision_at_3", "value": 34.694}, {"type": "precision_at_5", "value": 31.429000000000002}, {"type": "precision_at_7", "value": 27.697}, {"type": "precision_at_10", "value": 24.490000000000002}, {"type": "precision_at_20", "value": 18.776}, {"type": "precision_at_30", "value": 15.034}, {"type": "precision_at_50", "value": 10.857}, {"type": "precision_at_70", "value": 9.096}, {"type": "precision_at_100", "value": 7.51}, {"type": "precision_at_200", "value": 4.929}, {"type": "precision_at_300", "value": 3.7760000000000002}, {"type": "precision_at_500", "value": 2.6780000000000004}, {"type": "precision_at_700", "value": 2.085}, {"type": "precision_at_1000", "value": 1.5709999999999997}, {"type": "mrr_at_1", "value": 46.939}, {"type": "mrr_at_2", "value": 55.102}, 
{"type": "mrr_at_3", "value": 57.823}, {"type": "mrr_at_5", "value": 60.68}, {"type": "mrr_at_7", "value": 60.972}, {"type": "mrr_at_10", "value": 61.199000000000005}, {"type": "mrr_at_20", "value": 61.831}, {"type": "mrr_at_30", "value": 61.831}, {"type": "mrr_at_50", "value": 61.873}, {"type": "mrr_at_70", "value": 61.873}, {"type": "mrr_at_100", "value": 61.873}, {"type": "mrr_at_200", "value": 61.873}, {"type": "mrr_at_300", "value": 61.873}, {"type": "mrr_at_500", "value": 61.873}, {"type": "mrr_at_700", "value": 61.873}, {"type": "mrr_at_1000", "value": 61.873}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 69.3294}, {"type": "ap", "value": 14.561333393364736}, {"type": "f1", "value": 53.992309820496466}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 63.63893604980192}, {"type": "f1", "value": 63.92959380489434}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 56.270879258659775}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.71073493473207}, {"type": "cos_sim_ap", "value": 81.52392540284202}, {"type": "cos_sim_f1", "value": 74.71162377994676}, {"type": "cos_sim_precision", "value": 71.89558428885094}, {"type": "cos_sim_recall", "value": 77.75725593667546}, {"type": "dot_accuracy", "value": 88.71073493473207}, {"type": "dot_ap", "value": 81.52394754041109}, {"type": "dot_f1", "value": 74.71162377994676}, {"type": "dot_precision", "value": 71.89558428885094}, {"type": "dot_recall", "value": 77.75725593667546}, {"type": "euclidean_accuracy", "value": 88.71073493473207}, {"type": "euclidean_ap", "value": 81.52392035435321}, {"type": "euclidean_f1", "value": 74.71162377994676}, {"type": "euclidean_precision", "value": 71.89558428885094}, {"type": "euclidean_recall", "value": 77.75725593667546}, {"type": "manhattan_accuracy", "value": 88.47231328604637}, {"type": "manhattan_ap", "value": 81.22907439267321}, {"type": "manhattan_f1", "value": 74.3351571446749}, {"type": "manhattan_precision", "value": 71.78667977390022}, {"type": "manhattan_recall", "value": 77.0712401055409}, {"type": "max_accuracy", "value": 88.71073493473207}, {"type": "max_ap", "value": 81.52394754041109}, {"type": "max_f1", "value": 74.71162377994676}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.85136026700819}, {"type": "cos_sim_ap", "value": 87.7768002924216}, {"type": "cos_sim_f1", "value": 80.358908624794}, {"type": "cos_sim_precision", "value": 
76.62918209122023}, {"type": "cos_sim_recall", "value": 84.47028025870034}, {"type": "dot_accuracy", "value": 89.85136026700819}, {"type": "dot_ap", "value": 87.77680027889778}, {"type": "dot_f1", "value": 80.358908624794}, {"type": "dot_precision", "value": 76.62918209122023}, {"type": "dot_recall", "value": 84.47028025870034}, {"type": "euclidean_accuracy", "value": 89.85136026700819}, {"type": "euclidean_ap", "value": 87.77680174697751}, {"type": "euclidean_f1", "value": 80.358908624794}, {"type": "euclidean_precision", "value": 76.62918209122023}, {"type": "euclidean_recall", "value": 84.47028025870034}, {"type": "manhattan_accuracy", "value": 89.86300306593705}, {"type": "manhattan_ap", "value": 87.78613271895861}, {"type": "manhattan_f1", "value": 80.31831016905645}, {"type": "manhattan_precision", "value": 76.68230516070304}, {"type": "manhattan_recall", "value": 84.3162919618109}, {"type": "max_accuracy", "value": 89.86300306593705}, {"type": "max_ap", "value": 87.78613271895861}, {"type": "max_f1", "value": 80.358908624794}]}]}]}
dataset
null
511
risedev/test
risedev
zero-shot-image-classification
[ "transformers", "pytorch", "safetensors", "clip", "zero-shot-image-classification", "vision", "language", "fashion", "ecommerce", "en", "license:mit", "endpoints_compatible", "region:us" ]
2024-02-23T08:44:29Z
2024-02-23T09:29:35+00:00
13
0
--- language: - en library_name: transformers license: mit tags: - vision - language - fashion - ecommerce widget: - src: https://cdn-images.farfetch-contents.com/19/76/05/56/19760556_44221665_1000.jpg candidate_labels: black shoe, red shoe, a cat example_title: Black Shoe --- [![Youtube Video](https://img.shields.io/badge/youtube-video-red)](https://www.youtube.com/watch?v=uqRSc-KSA1Y) [![HuggingFace Model](https://img.shields.io/badge/HF%20Model-Weights-yellow)](https://huggingface.co/patrickjohncyh/fashion-clip) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Z1hAxBnWjF76bEi9KQ6CMBBEmI_FVDrW?usp=sharing) [![Medium Blog Post](https://raw.githubusercontent.com/aleen42/badges/master/src/medium.svg)](https://towardsdatascience.com/teaching-clip-some-fashion-3005ac3fdcc3) [![Open in Streamlit](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://huggingface.co/spaces/vinid/fashion-clip-app) # Model Card: Fashion CLIP Disclaimer: The model card adapts the model card from [here](https://huggingface.co/openai/clip-vit-base-patch32). ## Model Details UPDATE (10/03/23): We have updated the model! We found that the [laion/CLIP-ViT-B-32-laion2B-s34B-b79K](https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K) checkpoint (thanks [Bin](https://www.linkedin.com/in/bin-duan-56205310/)!) worked better than the original OpenAI CLIP on fashion. We thus fine-tune a newer (and better!) version of FashionCLIP (henceforth FashionCLIP 2.0), while keeping the architecture the same. We postulate that the performance gains afforded by `laion/CLIP-ViT-B-32-laion2B-s34B-b79K` are due to the increased training data (5x OpenAI CLIP data). Our [thesis](https://www.nature.com/articles/s41598-022-23052-9), however, remains the same -- fine-tuning `laion/CLIP` on our fashion dataset improved zero-shot performance across our benchmarks. See the table below comparing weighted macro F1 score across models. | Model | FMNIST | KAGL | DEEP | | ------------- | ------------- | ------------- | ------------- | | OpenAI CLIP | 0.66 | 0.63 | 0.45 | | FashionCLIP | 0.74 | 0.67 | 0.48 | | Laion CLIP | 0.78 | 0.71 | 0.58 | | FashionCLIP 2.0 | __0.83__ | __0.73__ | __0.62__ | --- FashionCLIP is a CLIP-based model developed to produce general product representations for fashion concepts. Leveraging the pre-trained checkpoint (ViT-B/32) released by [OpenAI](https://github.com/openai/CLIP), we train FashionCLIP on a large, high-quality novel fashion dataset to study whether domain-specific fine-tuning of CLIP-like models is sufficient to produce product representations that are zero-shot transferable to entirely new datasets and tasks. FashionCLIP was not developed for model deployment - to do so, researchers will first need to carefully study its capabilities in relation to the specific context it’s being deployed within. ### Model Date March 2023 ### Model Type The model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained, starting from a pre-trained checkpoint, to maximize the similarity of (image, text) pairs via a contrastive loss on a fashion dataset containing 800K products. 
### Documents - [FashionCLIP Github Repo](https://github.com/patrickjohncyh/fashion-clip) - [FashionCLIP Paper](https://www.nature.com/articles/s41598-022-23052-9) ## Data The model was trained on (image, text) pairs obtained from the Farfetch dataset[^1 Awaiting official release.], an English dataset comprising over 800K fashion products, with more than 3K brands across dozens of object types. The image used for encoding is the standard product image, which is a picture of the item over a white background, with no humans. The text used is a concatenation of the _highlight_ (e.g., “stripes”, “long sleeves”, “Armani”) and _short description_ (“80s styled t-shirt”) available in the Farfetch dataset. ## Limitations, Bias and Fairness We acknowledge certain limitations of FashionCLIP and expect that it inherits certain limitations and biases present in the original CLIP model. We do not expect our fine-tuning to significantly augment these limitations: we acknowledge that the fashion data we use makes explicit assumptions about the notion of gender as in "blue shoes for a woman" that inevitably associate aspects of clothing with specific people. Our investigations also suggest that the data used introduces certain limitations in FashionCLIP. From the textual modality, given that most captions derived from the Farfetch dataset are long, we observe that FashionCLIP may be more performant in longer queries than shorter ones. From the image modality, FashionCLIP is also biased towards standard product images (centered, white background). Model selection, i.e. selecting an appropriate stopping criterion during fine-tuning, remains an open challenge. We observed that using loss on an in-domain (i.e. same distribution as test) validation dataset is a poor selection criterion when out-of-domain generalization (i.e. across different datasets) is desired, even when the dataset used is relatively diverse and large. ## Citation ``` @Article{Chia2022, title="Contrastive language and vision learning of general fashion concepts", author="Chia, Patrick John and Attanasio, Giuseppe and Bianchi, Federico and Terragni, Silvia and Magalh{\~a}es, Ana Rita and Goncalves, Diogo and Greco, Ciro and Tagliabue, Jacopo", journal="Scientific Reports", year="2022", month="Nov", day="08", volume="12", number="1", abstract="The steady rise of online shopping goes hand in hand with the development of increasingly complex ML and NLP models. While most use cases are cast as specialized supervised learning problems, we argue that practitioners would greatly benefit from general and transferable representations of products. In this work, we build on recent developments in contrastive learning to train FashionCLIP, a CLIP-like model adapted for the fashion industry. We demonstrate the effectiveness of the representations learned by FashionCLIP with extensive tests across a variety of tasks, datasets and generalization probes. We argue that adaptations of large pre-trained models such as CLIP offer new perspectives in terms of scalability and sustainability for certain types of players in the industry. Finally, we detail the costs and environmental impact of training, and release the model weights and code as open source contribution to the community.", issn="2045-2322", doi="10.1038/s41598-022-23052-9", url="https://doi.org/10.1038/s41598-022-23052-9" } ```
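The widget in the frontmatter above pairs a product image URL with candidate text labels for zero-shot classification. A minimal sketch of that use, assuming the `transformers` zero-shot-image-classification pipeline and the `patrickjohncyh/fashion-clip` checkpoint linked from the card (swap in whichever FashionCLIP repo you actually intend to query):

```python
# Minimal zero-shot image classification sketch based on the card's widget example.
# Assumptions: the `transformers` zero-shot-image-classification pipeline and the
# `patrickjohncyh/fashion-clip` checkpoint linked from the card.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-image-classification",
    model="patrickjohncyh/fashion-clip",
)

# Image URL and candidate labels come from the widget example in the frontmatter.
image_url = "https://cdn-images.farfetch-contents.com/19/76/05/56/19760556_44221665_1000.jpg"
labels = ["black shoe", "red shoe", "a cat"]

# The pipeline encodes the image with the ViT-B/32 image encoder, encodes each
# label with the text encoder, and ranks labels by image-text similarity.
for result in classifier(image_url, candidate_labels=labels):
    print(f"{result['label']}: {result['score']:.3f}")
```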
[ "CHIA" ]
Non_BioNLP
[![Youtube Video](https://img.shields.io/badge/youtube-video-red)](https://www.youtube.com/watch?v=uqRSc-KSA1Y) [![HuggingFace Model](https://img.shields.io/badge/HF%20Model-Weights-yellow)](https://huggingface.co/patrickjohncyh/fashion-clip) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Z1hAxBnWjF76bEi9KQ6CMBBEmI_FVDrW?usp=sharing) [![Medium Blog Post](https://raw.githubusercontent.com/aleen42/badges/master/src/medium.svg)](https://towardsdatascience.com/teaching-clip-some-fashion-3005ac3fdcc3) [![Open in Streamlit](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://huggingface.co/spaces/vinid/fashion-clip-app) # Model Card: Fashion CLIP Disclaimer: The model card adapts the model card from [here](https://huggingface.co/openai/clip-vit-base-patch32). ## Model Details UPDATE (10/03/23): We have updated the model! We found that the [laion/CLIP-ViT-B-32-laion2B-s34B-b79K](https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K) checkpoint (thanks [Bin](https://www.linkedin.com/in/bin-duan-56205310/)!) worked better than the original OpenAI CLIP on fashion. We thus fine-tune a newer (and better!) version of FashionCLIP (henceforth FashionCLIP 2.0), while keeping the architecture the same. We postulate that the performance gains afforded by `laion/CLIP-ViT-B-32-laion2B-s34B-b79K` are due to the increased training data (5x OpenAI CLIP data). Our [thesis](https://www.nature.com/articles/s41598-022-23052-9), however, remains the same -- fine-tuning `laion/CLIP` on our fashion dataset improved zero-shot performance across our benchmarks. See the table below comparing weighted macro F1 score across models. | Model | FMNIST | KAGL | DEEP | | ------------- | ------------- | ------------- | ------------- | | OpenAI CLIP | 0.66 | 0.63 | 0.45 | | FashionCLIP | 0.74 | 0.67 | 0.48 | | Laion CLIP | 0.78 | 0.71 | 0.58 | | FashionCLIP 2.0 | __0.83__ | __0.73__ | __0.62__ | --- FashionCLIP is a CLIP-based model developed to produce general product representations for fashion concepts. Leveraging the pre-trained checkpoint (ViT-B/32) released by [OpenAI](https://github.com/openai/CLIP), we train FashionCLIP on a large, high-quality novel fashion dataset to study whether domain-specific fine-tuning of CLIP-like models is sufficient to produce product representations that are zero-shot transferable to entirely new datasets and tasks. FashionCLIP was not developed for model deployment - to do so, researchers will first need to carefully study its capabilities in relation to the specific context it’s being deployed within. ### Model Date March 2023 ### Model Type The model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained, starting from a pre-trained checkpoint, to maximize the similarity of (image, text) pairs via a contrastive loss on a fashion dataset containing 800K products. ### Documents - [FashionCLIP Github Repo](https://github.com/patrickjohncyh/fashion-clip) - [FashionCLIP Paper](https://www.nature.com/articles/s41598-022-23052-9) ## Data The model was trained on (image, text) pairs obtained from the Farfetch dataset[^1 Awaiting official release.], an English dataset comprising over 800K fashion products, with more than 3K brands across dozens of object types. The image used for encoding is the standard product image, which is a picture of the item over a white background, with no humans. 
The text used is a concatenation of the _highlight_ (e.g., “stripes”, “long sleeves”, “Armani”) and _short description_ (“80s styled t-shirt”) available in the Farfetch dataset. ## Limitations, Bias and Fairness We acknowledge certain limitations of FashionCLIP and expect that it inherits certain limitations and biases present in the original CLIP model. We do not expect our fine-tuning to significantly augment these limitations: we acknowledge that the fashion data we use makes explicit assumptions about the notion of gender as in "blue shoes for a woman" that inevitably associate aspects of clothing with specific people. Our investigations also suggest that the data used introduces certain limitations in FashionCLIP. From the textual modality, given that most captions derived from the Farfetch dataset are long, we observe that FashionCLIP may be more performant in longer queries than shorter ones. From the image modality, FashionCLIP is also biased towards standard product images (centered, white background). Model selection, i.e. selecting an appropriate stopping criterion during fine-tuning, remains an open challenge. We observed that using loss on an in-domain (i.e. same distribution as test) validation dataset is a poor selection criterion when out-of-domain generalization (i.e. across different datasets) is desired, even when the dataset used is relatively diverse and large. ## Citation ``` @Article{Chia2022, title="Contrastive language and vision learning of general fashion concepts", author="Chia, Patrick John and Attanasio, Giuseppe and Bianchi, Federico and Terragni, Silvia and Magalh{\~a}es, Ana Rita and Goncalves, Diogo and Greco, Ciro and Tagliabue, Jacopo", journal="Scientific Reports", year="2022", month="Nov", day="08", volume="12", number="1", abstract="The steady rise of online shopping goes hand in hand with the development of increasingly complex ML and NLP models. While most use cases are cast as specialized supervised learning problems, we argue that practitioners would greatly benefit from general and transferable representations of products. In this work, we build on recent developments in contrastive learning to train FashionCLIP, a CLIP-like model adapted for the fashion industry. We demonstrate the effectiveness of the representations learned by FashionCLIP with extensive tests across a variety of tasks, datasets and generalization probes. We argue that adaptations of large pre-trained models such as CLIP offer new perspectives in terms of scalability and sustainability for certain types of players in the industry. Finally, we detail the costs and environmental impact of training, and release the model weights and code as open source contribution to the community.", issn="2045-2322", doi="10.1038/s41598-022-23052-9", url="https://doi.org/10.1038/s41598-022-23052-9" } ```
{"language": ["en"], "library_name": "transformers", "license": "mit", "tags": ["vision", "language", "fashion", "ecommerce"], "widget": [{"src": "https://cdn-images.farfetch-contents.com/19/76/05/56/19760556_44221665_1000.jpg", "candidate_labels": "black shoe, red shoe, a cat", "example_title": "Black Shoe"}]}
dataset
null
512
macadeliccc/magistrate-3.2-3b-it-GGUF
macadeliccc
text-generation
[ "transformers", "gguf", "spectrum", "llama-3", "axolotl", "legal", "HFforLegal", "autoquant", "text-generation", "en", "dataset:teknium/OpenHermes-2.5", "dataset:NousResearch/hermes-function-calling-v1", "dataset:arcee-ai/The-Tome", "dataset:cognitivecomputations/SystemChat-2.0", "arxiv:2408.10914", "base_model:macadeliccc/magistrate-3.2-3b-base", "base_model:quantized:macadeliccc/magistrate-3.2-3b-base", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
2024-10-01T19:58:51Z
2024-10-01T20:33:37+00:00
831
1
--- base_model: macadeliccc/magistrate-3.2-3b-base datasets: - teknium/OpenHermes-2.5 - NousResearch/hermes-function-calling-v1 - arcee-ai/The-Tome - cognitivecomputations/SystemChat-2.0 language: - en library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - spectrum - llama-3 - axolotl - legal - HFforLegal - autoquant - gguf --- # magistrate-3.2-3b-it This model is a fine-tuned version of [macadeliccc/magistrate-3.2-3b-base](https://huggingface.co/macadeliccc/magistrate-3.2-3b-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8067 <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: macadeliccc/magistrate-3.2-3b-base model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: json type: sharegpt conversation: chatml data_files: train/hermes-2.5.jsonl # - path: json # type: sharegpt # conversation: chatml # data_files: train/financial_instructions_cleaned_2.json - path: json type: sharegpt conversation: chatml data_files: train/glaive-function-calling-5k.json - path: json type: sharegpt conversation: chatml data_files: train/func-calling-singleturn.json - path: json type: sharegpt conversation: chatml data_files: train/func-calling.json - path: json type: sharegpt conversation: chatml data_files: train/json-mode-agentic.json - path: json type: sharegpt conversation: chatml data_files: train/json-mode-singleturn.json - path: json type: sharegpt conversation: chatml data_files: train/reasoning_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/systemchat_2_0_small.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/303_creative_llc_v__elenis_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/abitron_austria_gmbh_v__hetronic_international__inc__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/acheson_hotels__llc_v__laufer_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/alexander_v__sc_conference_of_naacp_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/amgen_inc__v__sanofi_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/andy_warhol_found___inc__v__goldsmith_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/arizona_v__navajo_nation_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/becerra__sec__of_h_hs_v__san_carlos_apache_tribe_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/biden_v__nebraska_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/bissonnette_v__lepage_bakeries_park_st___llc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/bittner_v__united_states_sharegpt.json - path: json type: 
sharegpt conversation: chatml data_files: train/argument_dataset/brown_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/cantero_v__bank_of_america__n_a__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/cfpb_v__com__fin__services_assn__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/chiaverini_v__city_of_napoleon_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/ciminelli_v__united_state_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/city_of_grants_pass_v__johnson_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/coinbase__inc__v__bielski_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/coinbase__inc__v__suski_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/connelly_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/corner_post__inc__v__bd__of_governors__frs_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/counterman_v__colorado_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/cruz_v__arizona_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/culley_v__marshall_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/dept__of_agric__rural_dev__v__kirtz_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/dept__of_education_v__brown_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/dept__of_state_v__munoz_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/devillier_v__texas_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/diaz_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/dubin_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/dupree_v__younger_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/erlinger_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/fbi_v__fikre_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/fda_v__alliance_hippocratic_medicine_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/financial_oversight_board_v__cpi_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/fischer_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/garland__att_y_gen__v__cargill_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/glacier_northwest__inc__v__int_l_brotherhood_of_teamsters_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/gonzalez_v__google_llc_sharegpt.json - path: json type: sharegpt conversation: chatml 
data_files: train/argument_dataset/gonzalez_v__trevino_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/great_lakes_insurance_se_v__raiders_retreat_realty_co___llc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/groff_v__dejoy_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/harrington_v__purdue_pharma_l_p__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/harrow_v__dept__of_defense_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/health_and_hospital_corp__v__talevski_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/helix_energy_solutions_v__hewitt_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/in_re_grand_jury_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/jack_daniel_s_properties__inc__v__vip_products_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/jones_v__hendrix_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/karcho_polselli_v__irs_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/lac_du_flambeau_band_v__coughlin_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/lindke_v__freed_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/loper_bright_enterprises__inc__v__raimondo__sec__of_comm__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/lora_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/macquarie_infrastructure_corp__v__moab_partners__l_p__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/mallory_v__norfolk_southern_railway_co__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/mcintosh_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/merrill_v__milligan_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/moore_v__harper_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/moore_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/moyle_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/muldrow_v__st__louis_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/murray_v__ubs_securities__llc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/murthy__surgeon_gen__v__missouri_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/netchoice__llc_v__paxton_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/new_york_v__new_jersey_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/nra_v__vullo_sharegpt.json - path: json type: sharegpt conversation: chatml 
data_files: train/argument_dataset/o_connor_ratcliff_v__garnier_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/oh_adjutant_gen__s_dept__v__flra_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/ohio_v__epa_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/perez_v__sturgis_public_schools_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/pugin_v__garland_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/pulsifer_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/relentless__inc__v__dept__of_commerce_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/rudisill_v__mcdonough__sec__of_va_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/sackett_v__epa_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/samia_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/santos_zacaria_v__garland__att_y_gen__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/sec_v__cochran_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/sec_v__jarkesy_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/sheetz_v__county_of_el_dorado_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/slack_technologies__llc_v__pirani_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/smith_v__arizona_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/smith_v__spizzirri_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/smith_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/snyder_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/starbucks_corp__v__mckinney_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/students_for_fair_admissions_v__university_of_nc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/texas_v__new_mexico_and_colorado_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/thornell_v__jones_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/truck_insurance_exchange_v__kaiser_gypsum_co__inc__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/trump_v__anderson_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/turkiye_halk_bankasi_a_s__v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/twitter__inc__v__taamneh_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/tyler_v__hennepin_county_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: 
train/argument_dataset/u_s___ex_rel__polansky_v__executive_health_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/u_s___ex_rel__schutte_v__supervalu_inc__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/united_states_trustee_v__john_q__hammons_fall_2006__llc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/united_states_v__hansen_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/united_states_v__rahimi_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/united_states_v__texas_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/vidal__under_sec__of_comm__v__elster_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/warner_chappell_music__inc__v__nealy_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/wilkins_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/wilkinson_v__garland__att_y_gen__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/yegiazaryan_v__smagin_sharegpt.json chat_template: chatml unfrozen_parameters: - ^lm_head.weight$ - ^model.embed_tokens.weight$ # input_layernorm layers - model.layers.0.input_layernorm - model.layers.1.input_layernorm - model.layers.2.input_layernorm - model.layers.3.input_layernorm - model.layers.4.input_layernorm - model.layers.5.input_layernorm - model.layers.6.input_layernorm - model.layers.7.input_layernorm - model.layers.8.input_layernorm - model.layers.9.input_layernorm - model.layers.10.input_layernorm - model.layers.11.input_layernorm - model.layers.12.input_layernorm - model.layers.13.input_layernorm # mlp.down_proj layers - model.layers.0.mlp.down_proj - model.layers.1.mlp.down_proj - model.layers.17.mlp.down_proj - model.layers.19.mlp.down_proj - model.layers.18.mlp.down_proj - model.layers.5.mlp.down_proj - model.layers.20.mlp.down_proj - model.layers.2.mlp.down_proj - model.layers.4.mlp.down_proj - model.layers.6.mlp.down_proj - model.layers.3.mlp.down_proj - model.layers.16.mlp.down_proj - model.layers.15.mlp.down_proj - model.layers.13.mlp.down_proj # mlp.gate_proj layers - model.layers.0.mlp.gate_proj - model.layers.1.mlp.gate_proj - model.layers.2.mlp.gate_proj - model.layers.3.mlp.gate_proj - model.layers.22.mlp.gate_proj - model.layers.21.mlp.gate_proj - model.layers.20.mlp.gate_proj - model.layers.23.mlp.gate_proj - model.layers.19.mlp.gate_proj - model.layers.4.mlp.gate_proj - model.layers.18.mlp.gate_proj - model.layers.17.mlp.gate_proj - model.layers.5.mlp.gate_proj - model.layers.24.mlp.gate_proj # mlp.up_proj layers - model.layers.4.mlp.up_proj - model.layers.3.mlp.up_proj - model.layers.5.mlp.up_proj - model.layers.6.mlp.up_proj - model.layers.7.mlp.up_proj - model.layers.2.mlp.up_proj - model.layers.8.mlp.up_proj - model.layers.14.mlp.up_proj - model.layers.13.mlp.up_proj - model.layers.11.mlp.up_proj - model.layers.9.mlp.up_proj - model.layers.1.mlp.up_proj - model.layers.15.mlp.up_proj - model.layers.12.mlp.up_proj # post_attention_layernorm layers - model.layers.0.post_attention_layernorm - model.layers.1.post_attention_layernorm - model.layers.2.post_attention_layernorm - model.layers.3.post_attention_layernorm - 
model.layers.4.post_attention_layernorm - model.layers.5.post_attention_layernorm - model.layers.6.post_attention_layernorm - model.layers.7.post_attention_layernorm - model.layers.8.post_attention_layernorm - model.layers.9.post_attention_layernorm - model.layers.10.post_attention_layernorm - model.layers.11.post_attention_layernorm - model.layers.12.post_attention_layernorm - model.layers.13.post_attention_layernorm # self_attn.k_proj layers - model.layers.25.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.19.self_attn.k_proj - model.layers.20.self_attn.k_proj - model.layers.17.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.18.self_attn.k_proj - model.layers.21.self_attn.k_proj - model.layers.27.self_attn.k_proj - model.layers.15.self_attn.k_proj - model.layers.10.self_attn.k_proj - model.layers.6.self_attn.k_proj - model.layers.5.self_attn.k_proj # self_attn.o_proj layers - model.layers.13.self_attn.o_proj - model.layers.7.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.5.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.6.self_attn.o_proj - model.layers.19.self_attn.o_proj - model.layers.8.self_attn.o_proj - model.layers.20.self_attn.o_proj - model.layers.22.self_attn.o_proj - model.layers.9.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.11.self_attn.o_proj # self_attn.q_proj layers - model.layers.12.self_attn.q_proj - model.layers.13.self_attn.q_proj - model.layers.9.self_attn.q_proj - model.layers.8.self_attn.q_proj - model.layers.10.self_attn.q_proj - model.layers.14.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.26.self_attn.q_proj - model.layers.6.self_attn.q_proj - model.layers.7.self_attn.q_proj - model.layers.16.self_attn.q_proj - model.layers.5.self_attn.q_proj - model.layers.25.self_attn.q_proj # model.norm layers # self_attn.v_proj layers - model.layers.23.self_attn.v_proj - model.layers.14.self_attn.v_proj - model.layers.15.self_attn.v_proj - model.layers.19.self_attn.v_proj - model.layers.3.self_attn.v_proj - model.layers.18.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.4.self_attn.v_proj - model.layers.17.self_attn.v_proj - model.layers.22.self_attn.v_proj - model.layers.20.self_attn.v_proj - model.layers.13.self_attn.v_proj - model.layers.6.self_attn.v_proj - model.layers.27.self_attn.v_proj val_set_size: 0.05 output_dir: ./outputs/magistrate-3.2-3b sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true adapter: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 3 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 2e-4 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true s2_attention: warmup_steps: 1000 evals_per_epoch: 2 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: deepspeed_configs/zero3.json weight_decay: 0.0 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> ## Model description Magistrate-3.2-3b-it is a legal assistant specializing in US Supreme Court case law and US Federal regulations. 
The base model is pretrained with ~250M tokens containing no synthetic legal data. The instruct model does contain synthetic data. ## Intended uses & limitations This model is for research purposes and for continued development of the legal specialty. You are liable for all model outputs. ## Training and evaluation data This model was trained on a variety of standard open source datasets like OpenHermes-2.5, hermes-function-calling, and some select entries from the Tome. Additionally, I have included a comprehensive, non-synthetic argument dataset. This is a work in progress but has shown promising results so far. ## Training procedure Spectrum top 35% finetune for both pretrain and SFT. Thanks to the cognitive computations team for the work done with spectrum. + Pretraining methodology based on Cohere's paper: [To Code, or Not To Code? Exploring Impact of Code in Pre-training](https://arxiv.org/abs/2408.10914) + Instruct finetune largely based on OpenHermes-2.5 and hermes-function-calling ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - total_eval_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3754 | 0.0005 | 1 | 1.7429 | | 1.0 | 0.5002 | 1017 | 0.8864 | | 0.9482 | 1.0005 | 2034 | 0.8395 | | 0.6817 | 1.4987 | 3051 | 0.8063 | | 0.697 | 1.9991 | 4068 | 0.7580 | | 0.3769 | 2.4966 | 5085 | 0.8140 | | 0.4278 | 2.9965 | 6102 | 0.8067 | ### Framework versions - Transformers 4.45.0 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.20.0
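Because this record packages GGUF quants of the instruct model, and the axolotl config above trains with the ChatML template (`chat_template: chatml`, `<|im_start|>`/`<|im_end|>` tokens, `sequence_len: 8192`), a minimal local inference sketch with `llama-cpp-python` might look like the following; the quant filename is an assumption, so point `model_path` at whichever GGUF file you actually downloaded from the repo.

```python
# Minimal chat sketch, assuming llama-cpp-python and a locally downloaded quant
# of macadeliccc/magistrate-3.2-3b-it-GGUF. The filename below is hypothetical;
# use the GGUF file you actually fetched.
from llama_cpp import Llama

llm = Llama(
    model_path="magistrate-3.2-3b-it.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=8192,            # matches sequence_len in the axolotl config above
    chat_format="chatml",  # the card fine-tunes with ChatML and <|im_start|>/<|im_end|> tokens
)

messages = [
    {"role": "system", "content": "You are a legal research assistant focused on US Supreme Court case law."},
    {"role": "user", "content": "Summarize the question presented in Sackett v. EPA."},
]

response = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.2)
print(response["choices"][0]["message"]["content"])
```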
[ "CPI" ]
Non_BioNLP
# magistrate-3.2-3b-it This model is a fine-tuned version of [macadeliccc/magistrate-3.2-3b-base](https://huggingface.co/macadeliccc/magistrate-3.2-3b-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8067 <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: macadeliccc/magistrate-3.2-3b-base model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: json type: sharegpt conversation: chatml data_files: train/hermes-2.5.jsonl # - path: json # type: sharegpt # conversation: chatml # data_files: train/financial_instructions_cleaned_2.json - path: json type: sharegpt conversation: chatml data_files: train/glaive-function-calling-5k.json - path: json type: sharegpt conversation: chatml data_files: train/func-calling-singleturn.json - path: json type: sharegpt conversation: chatml data_files: train/func-calling.json - path: json type: sharegpt conversation: chatml data_files: train/json-mode-agentic.json - path: json type: sharegpt conversation: chatml data_files: train/json-mode-singleturn.json - path: json type: sharegpt conversation: chatml data_files: train/reasoning_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/systemchat_2_0_small.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/303_creative_llc_v__elenis_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/abitron_austria_gmbh_v__hetronic_international__inc__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/acheson_hotels__llc_v__laufer_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/alexander_v__sc_conference_of_naacp_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/amgen_inc__v__sanofi_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/andy_warhol_found___inc__v__goldsmith_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/arizona_v__navajo_nation_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/becerra__sec__of_h_hs_v__san_carlos_apache_tribe_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/biden_v__nebraska_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/bissonnette_v__lepage_bakeries_park_st___llc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/bittner_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/brown_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/cantero_v__bank_of_america__n_a__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: 
train/argument_dataset/cfpb_v__com__fin__services_assn__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/chiaverini_v__city_of_napoleon_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/ciminelli_v__united_state_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/city_of_grants_pass_v__johnson_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/coinbase__inc__v__bielski_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/coinbase__inc__v__suski_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/connelly_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/corner_post__inc__v__bd__of_governors__frs_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/counterman_v__colorado_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/cruz_v__arizona_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/culley_v__marshall_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/dept__of_agric__rural_dev__v__kirtz_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/dept__of_education_v__brown_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/dept__of_state_v__munoz_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/devillier_v__texas_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/diaz_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/dubin_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/dupree_v__younger_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/erlinger_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/fbi_v__fikre_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/fda_v__alliance_hippocratic_medicine_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/financial_oversight_board_v__cpi_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/fischer_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/garland__att_y_gen__v__cargill_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/glacier_northwest__inc__v__int_l_brotherhood_of_teamsters_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/gonzalez_v__google_llc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/gonzalez_v__trevino_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/great_lakes_insurance_se_v__raiders_retreat_realty_co___llc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: 
train/argument_dataset/groff_v__dejoy_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/harrington_v__purdue_pharma_l_p__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/harrow_v__dept__of_defense_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/health_and_hospital_corp__v__talevski_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/helix_energy_solutions_v__hewitt_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/in_re_grand_jury_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/jack_daniel_s_properties__inc__v__vip_products_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/jones_v__hendrix_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/karcho_polselli_v__irs_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/lac_du_flambeau_band_v__coughlin_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/lindke_v__freed_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/loper_bright_enterprises__inc__v__raimondo__sec__of_comm__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/lora_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/macquarie_infrastructure_corp__v__moab_partners__l_p__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/mallory_v__norfolk_southern_railway_co__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/mcintosh_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/merrill_v__milligan_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/moore_v__harper_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/moore_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/moyle_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/muldrow_v__st__louis_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/murray_v__ubs_securities__llc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/murthy__surgeon_gen__v__missouri_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/netchoice__llc_v__paxton_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/new_york_v__new_jersey_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/nra_v__vullo_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/o_connor_ratcliff_v__garnier_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/oh_adjutant_gen__s_dept__v__flra_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: 
train/argument_dataset/ohio_v__epa_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/perez_v__sturgis_public_schools_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/pugin_v__garland_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/pulsifer_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/relentless__inc__v__dept__of_commerce_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/rudisill_v__mcdonough__sec__of_va_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/sackett_v__epa_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/samia_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/santos_zacaria_v__garland__att_y_gen__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/sec_v__cochran_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/sec_v__jarkesy_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/sheetz_v__county_of_el_dorado_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/slack_technologies__llc_v__pirani_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/smith_v__arizona_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/smith_v__spizzirri_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/smith_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/snyder_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/starbucks_corp__v__mckinney_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/students_for_fair_admissions_v__university_of_nc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/texas_v__new_mexico_and_colorado_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/thornell_v__jones_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/truck_insurance_exchange_v__kaiser_gypsum_co__inc__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/trump_v__anderson_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/turkiye_halk_bankasi_a_s__v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/twitter__inc__v__taamneh_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/tyler_v__hennepin_county_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/u_s___ex_rel__polansky_v__executive_health_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/u_s___ex_rel__schutte_v__supervalu_inc__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: 
train/argument_dataset/united_states_trustee_v__john_q__hammons_fall_2006__llc_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/united_states_v__hansen_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/united_states_v__rahimi_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/united_states_v__texas_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/vidal__under_sec__of_comm__v__elster_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/warner_chappell_music__inc__v__nealy_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/wilkins_v__united_states_sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/wilkinson_v__garland__att_y_gen__sharegpt.json - path: json type: sharegpt conversation: chatml data_files: train/argument_dataset/yegiazaryan_v__smagin_sharegpt.json chat_template: chatml unfrozen_parameters: - ^lm_head.weight$ - ^model.embed_tokens.weight$ # input_layernorm layers - model.layers.0.input_layernorm - model.layers.1.input_layernorm - model.layers.2.input_layernorm - model.layers.3.input_layernorm - model.layers.4.input_layernorm - model.layers.5.input_layernorm - model.layers.6.input_layernorm - model.layers.7.input_layernorm - model.layers.8.input_layernorm - model.layers.9.input_layernorm - model.layers.10.input_layernorm - model.layers.11.input_layernorm - model.layers.12.input_layernorm - model.layers.13.input_layernorm # mlp.down_proj layers - model.layers.0.mlp.down_proj - model.layers.1.mlp.down_proj - model.layers.17.mlp.down_proj - model.layers.19.mlp.down_proj - model.layers.18.mlp.down_proj - model.layers.5.mlp.down_proj - model.layers.20.mlp.down_proj - model.layers.2.mlp.down_proj - model.layers.4.mlp.down_proj - model.layers.6.mlp.down_proj - model.layers.3.mlp.down_proj - model.layers.16.mlp.down_proj - model.layers.15.mlp.down_proj - model.layers.13.mlp.down_proj # mlp.gate_proj layers - model.layers.0.mlp.gate_proj - model.layers.1.mlp.gate_proj - model.layers.2.mlp.gate_proj - model.layers.3.mlp.gate_proj - model.layers.22.mlp.gate_proj - model.layers.21.mlp.gate_proj - model.layers.20.mlp.gate_proj - model.layers.23.mlp.gate_proj - model.layers.19.mlp.gate_proj - model.layers.4.mlp.gate_proj - model.layers.18.mlp.gate_proj - model.layers.17.mlp.gate_proj - model.layers.5.mlp.gate_proj - model.layers.24.mlp.gate_proj # mlp.up_proj layers - model.layers.4.mlp.up_proj - model.layers.3.mlp.up_proj - model.layers.5.mlp.up_proj - model.layers.6.mlp.up_proj - model.layers.7.mlp.up_proj - model.layers.2.mlp.up_proj - model.layers.8.mlp.up_proj - model.layers.14.mlp.up_proj - model.layers.13.mlp.up_proj - model.layers.11.mlp.up_proj - model.layers.9.mlp.up_proj - model.layers.1.mlp.up_proj - model.layers.15.mlp.up_proj - model.layers.12.mlp.up_proj # post_attention_layernorm layers - model.layers.0.post_attention_layernorm - model.layers.1.post_attention_layernorm - model.layers.2.post_attention_layernorm - model.layers.3.post_attention_layernorm - model.layers.4.post_attention_layernorm - model.layers.5.post_attention_layernorm - model.layers.6.post_attention_layernorm - model.layers.7.post_attention_layernorm - model.layers.8.post_attention_layernorm - model.layers.9.post_attention_layernorm - model.layers.10.post_attention_layernorm - 
model.layers.11.post_attention_layernorm - model.layers.12.post_attention_layernorm - model.layers.13.post_attention_layernorm # self_attn.k_proj layers - model.layers.25.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.19.self_attn.k_proj - model.layers.20.self_attn.k_proj - model.layers.17.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.18.self_attn.k_proj - model.layers.21.self_attn.k_proj - model.layers.27.self_attn.k_proj - model.layers.15.self_attn.k_proj - model.layers.10.self_attn.k_proj - model.layers.6.self_attn.k_proj - model.layers.5.self_attn.k_proj # self_attn.o_proj layers - model.layers.13.self_attn.o_proj - model.layers.7.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.5.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.6.self_attn.o_proj - model.layers.19.self_attn.o_proj - model.layers.8.self_attn.o_proj - model.layers.20.self_attn.o_proj - model.layers.22.self_attn.o_proj - model.layers.9.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.11.self_attn.o_proj # self_attn.q_proj layers - model.layers.12.self_attn.q_proj - model.layers.13.self_attn.q_proj - model.layers.9.self_attn.q_proj - model.layers.8.self_attn.q_proj - model.layers.10.self_attn.q_proj - model.layers.14.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.26.self_attn.q_proj - model.layers.6.self_attn.q_proj - model.layers.7.self_attn.q_proj - model.layers.16.self_attn.q_proj - model.layers.5.self_attn.q_proj - model.layers.25.self_attn.q_proj # model.norm layers # self_attn.v_proj layers - model.layers.23.self_attn.v_proj - model.layers.14.self_attn.v_proj - model.layers.15.self_attn.v_proj - model.layers.19.self_attn.v_proj - model.layers.3.self_attn.v_proj - model.layers.18.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.4.self_attn.v_proj - model.layers.17.self_attn.v_proj - model.layers.22.self_attn.v_proj - model.layers.20.self_attn.v_proj - model.layers.13.self_attn.v_proj - model.layers.6.self_attn.v_proj - model.layers.27.self_attn.v_proj val_set_size: 0.05 output_dir: ./outputs/magistrate-3.2-3b sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true adapter: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 3 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 2e-4 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true s2_attention: warmup_steps: 1000 evals_per_epoch: 2 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: deepspeed_configs/zero3.json weight_decay: 0.0 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> ## Model description Magistrate-3.2-3b-it is a legal assistant specializing in US Supreme Court case law and US Federal regulations. The base model is pretrained with ~250M tokens containing no synthetic legal data. The instruct model does contain synthetic data. ## Intended uses & limitations This model is for research purposes and for continued development of the legal specialty. You are liable for all model outputs. 
## Training and evaluation data This model was trained on a variety of standard open-source datasets such as OpenHermes-2.5, hermes-function-calling, and selected entries from The Tome. Additionally, I have included a comprehensive, non-synthetic argument dataset. This is a work in progress but has shown promising results so far. ## Training procedure A Spectrum top-35% finetune was used for both the pretraining and SFT stages. Thanks to the Cognitive Computations team for their work on Spectrum. + Pretraining methodology based on Cohere's paper: [To Code, or Not To Code? Exploring Impact of Code in Pre-training](https://arxiv.org/abs/2408.10914) + Instruct finetune largely based on OpenHermes-2.5 and hermes-function-calling ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - total_eval_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3754 | 0.0005 | 1 | 1.7429 | | 1.0 | 0.5002 | 1017 | 0.8864 | | 0.9482 | 1.0005 | 2034 | 0.8395 | | 0.6817 | 1.4987 | 3051 | 0.8063 | | 0.697 | 1.9991 | 4068 | 0.7580 | | 0.3769 | 2.4966 | 5085 | 0.8140 | | 0.4278 | 2.9965 | 6102 | 0.8067 | ### Framework versions - Transformers 4.45.0 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.20.0
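The card stops at the training details, so a minimal inference sketch is added below. The repo id is a placeholder assumption (the card does not state where the instruct checkpoint is published), and the example relies on the ChatML template configured during training. As noted above, this is a research model and you remain responsible for its outputs.

```python
# Minimal inference sketch; the repo id below is a placeholder assumption, since the card
# does not state where the instruct checkpoint is published.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/magistrate-3.2-3b"  # assumed / placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# The training config above uses the ChatML template (<|im_start|> / <|im_end|>),
# so apply_chat_template should render prompts in that format.
messages = [
    {"role": "system", "content": "You are a legal research assistant focused on US Supreme Court case law."},
    {"role": "user", "content": "Summarize the question presented in United States v. Rahimi."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=400, temperature=0.3, do_sample=True)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```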
{"base_model": "macadeliccc/magistrate-3.2-3b-base", "datasets": ["teknium/OpenHermes-2.5", "NousResearch/hermes-function-calling-v1", "arcee-ai/The-Tome", "cognitivecomputations/SystemChat-2.0"], "language": ["en"], "library_name": "transformers", "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["spectrum", "llama-3", "axolotl", "legal", "HFforLegal", "autoquant", "gguf"]}
dataset
null
513
mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF
mradermacher
null
[ "transformers", "gguf", "en", "dataset:bigbio/med_qa", "dataset:qiaojin/PubMedQA", "base_model:enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled", "base_model:quantized:enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled", "endpoints_compatible", "region:us" ]
2025-03-14T18:48:01Z
2025-03-14T18:56:25+00:00
258
0
--- base_model: enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled datasets: - bigbio/med_qa - qiaojin/PubMedQA language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q2_K.gguf) | Q2_K | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q3_K_S.gguf) | Q3_K_S | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q3_K_L.gguf) | Q3_K_L | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.IQ4_XS.gguf) | IQ4_XS | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q5_K_S.gguf) | Q5_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q5_K_M.gguf) | Q5_K_M | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q6_K.gguf) | Q6_K | 1.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.f16.gguf) | f16 | 2.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model 
Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
[ "PUBMEDQA" ]
BioNLP
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q2_K.gguf) | Q2_K | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q3_K_S.gguf) | Q3_K_S | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q3_K_L.gguf) | Q3_K_L | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.IQ4_XS.gguf) | IQ4_XS | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q5_K_S.gguf) | Q5_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q5_K_M.gguf) | Q5_K_M | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q6_K.gguf) | Q6_K | 1.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.f16.gguf) | f16 | 2.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
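As a supplement to the Usage section above, here is a minimal sketch using `llama-cpp-python`. It assumes one of the quant files listed above has been downloaded locally; the local path and the prompt are illustrative, and the underlying model's preferred prompt format is not documented in this card.

```python
# Minimal sketch (assumes `pip install llama-cpp-python` and a locally downloaded quant file).
from llama_cpp import Llama

llm = Llama(
    model_path="./Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q4_K_M.gguf",  # illustrative local path
    n_ctx=4096,  # context window; adjust to available memory
)

out = llm(
    "Question: What class of drugs is typically used first for uncomplicated hypertension?\nAnswer:",
    max_tokens=200,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```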
{"base_model": "enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled", "datasets": ["bigbio/med_qa", "qiaojin/PubMedQA"], "language": ["en"], "library_name": "transformers", "quantized_by": "mradermacher"}
dataset
null
514
QuantFactory/Llama-3-Patronus-Lynx-8B-Instruct-GGUF
QuantFactory
text-generation
[ "transformers", "gguf", "text-generation", "pytorch", "Lynx", "Patronus AI", "evaluation", "hallucination-detection", "en", "base_model:PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct", "base_model:quantized:PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
2024-07-12T16:33:16Z
2024-07-17T09:16:04+00:00
682
1
--- base_model: PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct language: - en library_name: transformers license: cc-by-nc-4.0 pipeline_tag: text-generation tags: - text-generation - pytorch - Lynx - Patronus AI - evaluation - hallucination-detection --- # QuantFactory/Llama-3-Patronus-Lynx-8B-Instruct-GGUF This is a quantized version of [PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct](https://huggingface.co/PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct) created using llama.cpp. # Model Description Lynx is an open-source hallucination evaluation model. Patronus-Lynx-8B-Instruct was trained on a mix of datasets including CovidQA, PubmedQA, DROP, and RAGTruth. The datasets contain a mix of hand-annotated and synthetic data. The maximum sequence length is 8000 tokens. ## Model Details - **Model Type:** Patronus-Lynx-8B-Instruct is a fine-tuned version of the meta-llama/Meta-Llama-3-8B-Instruct model. - **Language:** Primarily English - **Developed by:** Patronus AI - **License:** [https://creativecommons.org/licenses/by-nc/4.0/](https://creativecommons.org/licenses/by-nc/4.0/) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** [https://github.com/patronus-ai/Lynx-hallucination-detection](https://github.com/patronus-ai/Lynx-hallucination-detection) ## How to Get Started with the Model The model is fine-tuned to detect hallucinations in a RAG setting. Provided a document, question, and answer, the model can evaluate whether the answer is faithful to the document. To use the model, we recommend using the prompt we used for fine-tuning: ``` PROMPT = """ Given the following QUESTION, DOCUMENT and ANSWER you must analyze the provided answer and determine whether it is faithful to the contents of the DOCUMENT. The ANSWER must not offer new information beyond the context provided in the DOCUMENT. The ANSWER also must not contradict information provided in the DOCUMENT. Output your final verdict by strictly following this format: "PASS" if the answer is faithful to the DOCUMENT and "FAIL" if the answer is not faithful to the DOCUMENT. Show your reasoning. -- QUESTION (THIS DOES NOT COUNT AS BACKGROUND INFORMATION): {question} -- DOCUMENT: {context} -- ANSWER: {answer} -- Your output should be in JSON FORMAT with the keys "REASONING" and "SCORE": {{"REASONING": <your reasoning as bullet points>, "SCORE": <your final score>}} """ ``` The model will output the score as 'PASS' if the answer is faithful to the document or 'FAIL' if the answer is not faithful to the document. ## Training Details The model was finetuned for 3 epochs using H100s on a dataset of size 2400. We use the [lion](https://github.com/lucidrains/lion-pytorch) optimizer with lr=5.0e-7. For more details on data generation, please check out our GitHub repo. ### Training Data We train on 2400 samples consisting of CovidQA, PubmedQA, DROP and RAGTruth samples. For datasets that do not contain hallucinated samples, we generate perturbations to introduce hallucinations in the data. For more details about the data generation process, refer to the paper. ## Evaluation The model was evaluated on [PatronusAI/HaluBench](https://huggingface.co/datasets/PatronusAI/HaluBench). It outperforms GPT-3.5-Turbo, GPT-4-Turbo, GPT-4o and Claude Sonnet. ## Model Card Contact [@sunitha-ravi](https://huggingface.co/sunitha-ravi)
[ "PUBMEDQA" ]
BioNLP
# QuantFactory/Llama-3-Patronus-Lynx-8B-Instruct-GGUF This is a quantized version of [PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct](https://huggingface.co/PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct) created using llama.cpp. # Model Description Lynx is an open-source hallucination evaluation model. Patronus-Lynx-8B-Instruct was trained on a mix of datasets including CovidQA, PubmedQA, DROP, and RAGTruth. The datasets contain a mix of hand-annotated and synthetic data. The maximum sequence length is 8000 tokens. ## Model Details - **Model Type:** Patronus-Lynx-8B-Instruct is a fine-tuned version of the meta-llama/Meta-Llama-3-8B-Instruct model. - **Language:** Primarily English - **Developed by:** Patronus AI - **License:** [https://creativecommons.org/licenses/by-nc/4.0/](https://creativecommons.org/licenses/by-nc/4.0/) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** [https://github.com/patronus-ai/Lynx-hallucination-detection](https://github.com/patronus-ai/Lynx-hallucination-detection) ## How to Get Started with the Model The model is fine-tuned to detect hallucinations in a RAG setting. Provided a document, question, and answer, the model can evaluate whether the answer is faithful to the document. To use the model, we recommend using the prompt we used for fine-tuning: ``` PROMPT = """ Given the following QUESTION, DOCUMENT and ANSWER you must analyze the provided answer and determine whether it is faithful to the contents of the DOCUMENT. The ANSWER must not offer new information beyond the context provided in the DOCUMENT. The ANSWER also must not contradict information provided in the DOCUMENT. Output your final verdict by strictly following this format: "PASS" if the answer is faithful to the DOCUMENT and "FAIL" if the answer is not faithful to the DOCUMENT. Show your reasoning. -- QUESTION (THIS DOES NOT COUNT AS BACKGROUND INFORMATION): {question} -- DOCUMENT: {context} -- ANSWER: {answer} -- Your output should be in JSON FORMAT with the keys "REASONING" and "SCORE": {{"REASONING": <your reasoning as bullet points>, "SCORE": <your final score>}} """ ``` The model will output the score as 'PASS' if the answer is faithful to the document or 'FAIL' if the answer is not faithful to the document. ## Training Details The model was finetuned for 3 epochs using H100s on a dataset of size 2400. We use the [lion](https://github.com/lucidrains/lion-pytorch) optimizer with lr=5.0e-7. For more details on data generation, please check out our GitHub repo. ### Training Data We train on 2400 samples consisting of CovidQA, PubmedQA, DROP and RAGTruth samples. For datasets that do not contain hallucinated samples, we generate perturbations to introduce hallucinations in the data. For more details about the data generation process, refer to the paper. ## Evaluation The model was evaluated on [PatronusAI/HaluBench](https://huggingface.co/datasets/PatronusAI/HaluBench). It outperforms GPT-3.5-Turbo, GPT-4-Turbo, GPT-4o and Claude Sonnet. ## Model Card Contact [@sunitha-ravi](https://huggingface.co/sunitha-ravi)
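A minimal sketch of running the prompt above end to end is given below. It loads the original, non-quantized checkpoint with `transformers` (the GGUF files in this repo would instead be loaded with a llama.cpp-compatible runtime), and the question/document/answer values are illustrative only.

```python
# Minimal sketch; loads the original checkpoint with transformers. The GGUF files in this
# repo would instead be run with a llama.cpp-compatible runtime.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Paste the fine-tuning template from the section above into PROMPT before running.
PROMPT = """..."""

prompt = PROMPT.format(
    question="What treatment did the physician recommend?",
    context="The physician recommended a low-sodium diet and daily walking.",
    answer="The physician recommended daily walking and a low-sodium diet.",
)

result = generator(prompt, max_new_tokens=256, do_sample=False, return_full_text=False)
print(result[0]["generated_text"])  # expected: JSON with "REASONING" and "SCORE" ("PASS"/"FAIL")
```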
{"base_model": "PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct", "language": ["en"], "library_name": "transformers", "license": "cc-by-nc-4.0", "pipeline_tag": "text-generation", "tags": ["text-generation", "pytorch", "Lynx", "Patronus AI", "evaluation", "hallucination-detection"]}
dataset
null
515
oongaboongahacker/phi-2
oongaboongahacker
text-generation
[ "transformers", "pytorch", "mixformer-sequential", "text-generation", "custom_code", "autotrain_compatible", "region:us" ]
2023-12-13T13:01:48Z
2023-12-13T13:24:37+00:00
22
22
--- {} --- THE MODEL IS NOT OWNED BY ME IN ANY CASE. THIS IS SOLELY THE PROPERTY OF MICROSOFT UNDER THE FOLLOWING LICENSE: MICROSOFT RESEARCH LICENSE TERMS IF YOU LIVE IN THE UNITED STATES, PLEASE READ THE “BINDING ARBITRATION AND CLASS ACTION WAIVER” SECTION BELOW. IT AFFECTS HOW DISPUTES ARE RESOLVED. These license terms are an agreement between you and Microsoft Corporation (or one of its affiliates). They apply to the source code, object code, machine learning models, or data (collectively “Materials”) that accompany this license. IF YOU COMPLY WITH THESE LICENSE TERMS, YOU HAVE THE RIGHTS BELOW. BY USING THE MATERIALS, YOU ACCEPT THESE TERMS. 1) INSTALLATION AND USE RIGHTS TO THE MATERIALS. Subject to the terms of this agreement, you have the below rights, if applicable, to use the Materials solely for non-commercial, non-revenue generating, research purposes: a) Source Code. If source code is included, you may use and modify the source code, but you may not distribute the source code. b) Object Code. If object code is included, you may use the object code, but you may not distribute the object code. c) Models. If machine learning model(s) are included, you may use the model(s), but you may not distribute the models. d) Data. If data is included, you may use and modify the data, but your use and modification must be consistent with the consent under which the data was provided and/or gathered and you may not distribute the data or your modifications to the data. 2) SCOPE OF LICENSE. The Materials are licensed, not sold. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you will not (and have no right to): a) work around any technical limitations in the Materials that only allow you to use it in certain ways; b) reverse engineer, decompile or disassemble the Materials; c) remove, minimize, block, or modify any notices of Microsoft or its suppliers in the Materials; d) use the Materials in any way that is against the law or to create or propagate malware; or e) share, publish, distribute or lend the Materials, provide the Materials as a stand-alone hosted solution for others to use, or transfer the Materials or this agreement to any third party. 3) PERSONAL DATA. If the data (set forth in Section 1(c) above) includes or is found to include any data that enables any ability to identify an individual (“Personal Data”), you will not use such Personal Data for any purpose other than was authorized and consented to by the data subject/research participant. You will not use Personal Data to contact any person. You will keep Personal Data in strict confidence. You will not share any Personal Data that is collected or in your possession with any third party for any reason and as required under the original consent agreement. Further, you will destroy the Personal Data and any backup or copies, immediately upon the completion of your research. 4) LICENSE TO MICROSOFT. Notwithstanding the limitations in Section 1, you may distribute your modifications back to Microsoft, and if you do provide Microsoft with modifications of the Materials, you hereby grant Microsoft, without any restrictions or limitations, a non-exclusive, perpetual, irrevocable, royalty-free, assignable and sub-licensable license, to reproduce, publicly perform or display, install, use, modify, post, distribute, make and have made, sell and transfer such modifications and derivatives for any purpose. 5) PUBLICATION. 
You may publish (or present papers or articles) on your results from using the Materials provided that no material or substantial portion of the Materials is included in any such publication or presentation. 6) FEEDBACK. Any feedback about the Materials provided by you to us is voluntarily given, and Microsoft shall be free to use the feedback as it sees fit without obligation or restriction of any kind, even if the feedback is designated by you as confidential. Such feedback shall be considered a contribution and licensed to Microsoft under the terms of Section 4 above. 7) EXPORT RESTRICTIONS. You must comply with all domestic and international export laws and regulations that apply to the Materials, which include restrictions on destinations, end users, and end use. For further information on export restrictions, visit (aka.ms/exporting). 8) SUPPORT SERVICES. Microsoft is not obligated under this agreement to provide any support services for the Materials. Any support provided is “as is”, “with all faults”, and without warranty of any kind. 9) BINDING ARBITRATION AND CLASS ACTION WAIVER. This Section applies if you live in (or, if a business, your principal place of business is in) the United States. If you and Microsoft have a dispute, you and Microsoft agree to try for 60 days to resolve it informally. If you and Microsoft can’t, you and Microsoft agree to binding individual arbitration before the American Arbitration Association under the Federal Arbitration Act (“FAA”), and not to sue in court in front of a judge or jury. Instead, a neutral arbitrator will decide. Class action lawsuits, class-wide arbitrations, private attorney-general actions, and any other proceeding where someone acts in a representative capacity are not allowed; nor is combining individual proceedings without the consent of all parties. The complete Arbitration Agreement contains more terms and is at aka.ms/arb-agreement-1. You and Microsoft agree to these terms. 10) ENTIRE AGREEMENT. This agreement, and any other terms Microsoft may provide for supplements, updates, or third-party applications, is the entire agreement for the Materials. 11) APPLICABLE LAW AND PLACE TO RESOLVE DISPUTES. If you acquired the Materials in the United States or Canada, the laws of the state or province where you live (or, if a business, where your principal place of business is located) govern the interpretation of this agreement, claims for its breach, and all other claims (including consumer protection, unfair competition, and tort claims), regardless of conflict of laws principles, except that the FAA governs everything related to arbitration. If you acquired the Materials in any other country, its laws apply, except that the FAA governs everything related to arbitration. If U.S. federal jurisdiction exists, you and Microsoft consent to exclusive jurisdiction and venue in the federal court in King County, Washington for all disputes heard in court (excluding arbitration). If not, you and Microsoft consent to exclusive jurisdiction and venue in the Superior Court of King County, Washington for all disputes heard in court (excluding arbitration). 12) CONSUMER RIGHTS; REGIONAL VARIATIONS. This agreement describes certain legal rights. You may have other rights, including consumer rights, under the laws of your state, province, or country. Separate and apart from your relationship with Microsoft, you may also have rights with respect to the party from which you acquired the Materials. 
This agreement does not change those other rights if the laws of your state, province, or country do not permit it to do so. For example, if you acquired the Materials in one of the below regions, or mandatory country law applies, then the following provisions apply to you: a) Australia. You have statutory guarantees under the Australian Consumer Law and nothing in this agreement is intended to affect those rights. b) Canada. If you acquired this software in Canada, you may stop receiving updates by turning off the automatic update feature, disconnecting your device from the Internet (if and when you re-connect to the Internet, however, the Materials will resume checking for and installing updates), or uninstalling the Materials. The product documentation, if any, may also specify how to turn off updates for your specific device or software. c) Germany and Austria. i. Warranty. The properly licensed software will perform substantially as described in any Microsoft materials that accompany the Materials. However, Microsoft gives no contractual guarantee in relation to the licensed software. ii. Limitation of Liability. In case of intentional conduct, gross negligence, claims based on the Product Liability Act, as well as, in case of death or personal or physical injury, Microsoft is liable according to the statutory law. Subject to the foregoing clause (ii), Microsoft will only be liable for slight negligence if Microsoft is in breach of such material contractual obligations, the fulfillment of which facilitate the due performance of this agreement, the breach of which would endanger the purpose of this agreement and the compliance with which a party may constantly trust in (so-called "cardinal obligations"). In other cases of slight negligence, Microsoft will not be liable for slight negligence. 13) DISCLAIMER OF WARRANTY. THE MATERIALS ARE LICENSED “AS IS.” YOU BEAR THE RISK OF USING THEM. MICROSOFT GIVES NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. TO THE EXTENT PERMITTED UNDER APPLICABLE LAWS, MICROSOFT EXCLUDES ALL IMPLIED WARRANTIES, INCLUDING MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. 14) LIMITATION ON AND EXCLUSION OF DAMAGES. IF YOU HAVE ANY BASIS FOR RECOVERING DAMAGES DESPITE THE PRECEDING DISCLAIMER OF WARRANTY, YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES. This limitation applies to (a) anything related to the Materials, services, content (including code) on third party Internet sites, or third party applications; and (b) claims for breach of contract, warranty, guarantee, or condition; strict liability, negligence, or other tort; or any other claim; in each case to the extent permitted by applicable law. It also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you because your state, province, or country may not allow the exclusion or limitation of incidental, consequential, or other damages.
[ "BEAR" ]
Non_BioNLP
THE MODEL IS NOT OWNED BY ME IN ANY CASE. THIS IS SOLELY THE PROPERTY OF MICROSOFT UNDER THE FOLLOWING LICENSE: MICROSOFT RESEARCH LICENSE TERMS IF YOU LIVE IN THE UNITED STATES, PLEASE READ THE “BINDING ARBITRATION AND CLASS ACTION WAIVER” SECTION BELOW. IT AFFECTS HOW DISPUTES ARE RESOLVED. These license terms are an agreement between you and Microsoft Corporation (or one of its affiliates). They apply to the source code, object code, machine learning models, or data (collectively “Materials”) that accompany this license. IF YOU COMPLY WITH THESE LICENSE TERMS, YOU HAVE THE RIGHTS BELOW. BY USING THE MATERIALS, YOU ACCEPT THESE TERMS. 1) INSTALLATION AND USE RIGHTS TO THE MATERIALS. Subject to the terms of this agreement, you have the below rights, if applicable, to use the Materials solely for non-commercial, non-revenue generating, research purposes: a) Source Code. If source code is included, you may use and modify the source code, but you may not distribute the source code. b) Object Code. If object code is included, you may use the object code, but you may not distribute the object code. c) Models. If machine learning model(s) are included, you may use the model(s), but you may not distribute the models. d) Data. If data is included, you may use and modify the data, but your use and modification must be consistent with the consent under which the data was provided and/or gathered and you may not distribute the data or your modifications to the data. 2) SCOPE OF LICENSE. The Materials are licensed, not sold. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you will not (and have no right to): a) work around any technical limitations in the Materials that only allow you to use it in certain ways; b) reverse engineer, decompile or disassemble the Materials; c) remove, minimize, block, or modify any notices of Microsoft or its suppliers in the Materials; d) use the Materials in any way that is against the law or to create or propagate malware; or e) share, publish, distribute or lend the Materials, provide the Materials as a stand-alone hosted solution for others to use, or transfer the Materials or this agreement to any third party. 3) PERSONAL DATA. If the data (set forth in Section 1(c) above) includes or is found to include any data that enables any ability to identify an individual (“Personal Data”), you will not use such Personal Data for any purpose other than was authorized and consented to by the data subject/research participant. You will not use Personal Data to contact any person. You will keep Personal Data in strict confidence. You will not share any Personal Data that is collected or in your possession with any third party for any reason and as required under the original consent agreement. Further, you will destroy the Personal Data and any backup or copies, immediately upon the completion of your research. 4) LICENSE TO MICROSOFT. Notwithstanding the limitations in Section 1, you may distribute your modifications back to Microsoft, and if you do provide Microsoft with modifications of the Materials, you hereby grant Microsoft, without any restrictions or limitations, a non-exclusive, perpetual, irrevocable, royalty-free, assignable and sub-licensable license, to reproduce, publicly perform or display, install, use, modify, post, distribute, make and have made, sell and transfer such modifications and derivatives for any purpose. 5) PUBLICATION. 
You may publish (or present papers or articles) on your results from using the Materials provided that no material or substantial portion of the Materials is included in any such publication or presentation. 6) FEEDBACK. Any feedback about the Materials provided by you to us is voluntarily given, and Microsoft shall be free to use the feedback as it sees fit without obligation or restriction of any kind, even if the feedback is designated by you as confidential. Such feedback shall be considered a contribution and licensed to Microsoft under the terms of Section 4 above. 7) EXPORT RESTRICTIONS. You must comply with all domestic and international export laws and regulations that apply to the Materials, which include restrictions on destinations, end users, and end use. For further information on export restrictions, visit (aka.ms/exporting). 8) SUPPORT SERVICES. Microsoft is not obligated under this agreement to provide any support services for the Materials. Any support provided is “as is”, “with all faults”, and without warranty of any kind. 9) BINDING ARBITRATION AND CLASS ACTION WAIVER. This Section applies if you live in (or, if a business, your principal place of business is in) the United States. If you and Microsoft have a dispute, you and Microsoft agree to try for 60 days to resolve it informally. If you and Microsoft can’t, you and Microsoft agree to binding individual arbitration before the American Arbitration Association under the Federal Arbitration Act (“FAA”), and not to sue in court in front of a judge or jury. Instead, a neutral arbitrator will decide. Class action lawsuits, class-wide arbitrations, private attorney-general actions, and any other proceeding where someone acts in a representative capacity are not allowed; nor is combining individual proceedings without the consent of all parties. The complete Arbitration Agreement contains more terms and is at aka.ms/arb-agreement-1. You and Microsoft agree to these terms. 10) ENTIRE AGREEMENT. This agreement, and any other terms Microsoft may provide for supplements, updates, or third-party applications, is the entire agreement for the Materials. 11) APPLICABLE LAW AND PLACE TO RESOLVE DISPUTES. If you acquired the Materials in the United States or Canada, the laws of the state or province where you live (or, if a business, where your principal place of business is located) govern the interpretation of this agreement, claims for its breach, and all other claims (including consumer protection, unfair competition, and tort claims), regardless of conflict of laws principles, except that the FAA governs everything related to arbitration. If you acquired the Materials in any other country, its laws apply, except that the FAA governs everything related to arbitration. If U.S. federal jurisdiction exists, you and Microsoft consent to exclusive jurisdiction and venue in the federal court in King County, Washington for all disputes heard in court (excluding arbitration). If not, you and Microsoft consent to exclusive jurisdiction and venue in the Superior Court of King County, Washington for all disputes heard in court (excluding arbitration). 12) CONSUMER RIGHTS; REGIONAL VARIATIONS. This agreement describes certain legal rights. You may have other rights, including consumer rights, under the laws of your state, province, or country. Separate and apart from your relationship with Microsoft, you may also have rights with respect to the party from which you acquired the Materials. 
This agreement does not change those other rights if the laws of your state, province, or country do not permit it to do so. For example, if you acquired the Materials in one of the below regions, or mandatory country law applies, then the following provisions apply to you: a) Australia. You have statutory guarantees under the Australian Consumer Law and nothing in this agreement is intended to affect those rights. b) Canada. If you acquired this software in Canada, you may stop receiving updates by turning off the automatic update feature, disconnecting your device from the Internet (if and when you re-connect to the Internet, however, the Materials will resume checking for and installing updates), or uninstalling the Materials. The product documentation, if any, may also specify how to turn off updates for your specific device or software. c) Germany and Austria. i. Warranty. The properly licensed software will perform substantially as described in any Microsoft materials that accompany the Materials. However, Microsoft gives no contractual guarantee in relation to the licensed software. ii. Limitation of Liability. In case of intentional conduct, gross negligence, claims based on the Product Liability Act, as well as, in case of death or personal or physical injury, Microsoft is liable according to the statutory law. Subject to the foregoing clause (ii), Microsoft will only be liable for slight negligence if Microsoft is in breach of such material contractual obligations, the fulfillment of which facilitate the due performance of this agreement, the breach of which would endanger the purpose of this agreement and the compliance with which a party may constantly trust in (so-called "cardinal obligations"). In other cases of slight negligence, Microsoft will not be liable for slight negligence. 13) DISCLAIMER OF WARRANTY. THE MATERIALS ARE LICENSED “AS IS.” YOU BEAR THE RISK OF USING THEM. MICROSOFT GIVES NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. TO THE EXTENT PERMITTED UNDER APPLICABLE LAWS, MICROSOFT EXCLUDES ALL IMPLIED WARRANTIES, INCLUDING MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. 14) LIMITATION ON AND EXCLUSION OF DAMAGES. IF YOU HAVE ANY BASIS FOR RECOVERING DAMAGES DESPITE THE PRECEDING DISCLAIMER OF WARRANTY, YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES. This limitation applies to (a) anything related to the Materials, services, content (including code) on third party Internet sites, or third party applications; and (b) claims for breach of contract, warranty, guarantee, or condition; strict liability, negligence, or other tort; or any other claim; in each case to the extent permitted by applicable law. It also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you because your state, province, or country may not allow the exclusion or limitation of incidental, consequential, or other damages.
{}
dataset
null
516
Dogebooch/BioBERT-mnli-snli-scinli-scitail-mednli-stsb-ncbi
Dogebooch
token-classification
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:ncbi_disease", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-16T02:42:46Z
2023-05-16T12:49:11+00:00
29
0
--- datasets: - ncbi_disease metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer model-index: - name: BioBERT-mnli-snli-scinli-scitail-mednli-stsb-ncbi results: - task: type: token-classification name: Token Classification dataset: name: ncbi_disease type: ncbi_disease config: ncbi_disease split: test args: ncbi_disease metrics: - type: precision value: 0.8604187437686939 name: Precision - type: recall value: 0.8989583333333333 name: Recall - type: f1 value: 0.879266428935303 name: F1 - type: accuracy value: 0.9870188186308527 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BioBERT-mnli-snli-scinli-scitail-mednli-stsb-ncbi This model is a fine-tuned version of [pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb](https://huggingface.co/pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 0.0814 - Precision: 0.8604 - Recall: 0.8990 - F1: 0.8793 - Accuracy: 0.9870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 340 | 0.0481 | 0.8308 | 0.8438 | 0.8372 | 0.9840 | | 0.0715 | 2.0 | 680 | 0.0497 | 0.8337 | 0.8771 | 0.8548 | 0.9857 | | 0.0152 | 3.0 | 1020 | 0.0588 | 0.8596 | 0.8802 | 0.8698 | 0.9858 | | 0.0152 | 4.0 | 1360 | 0.0589 | 0.8589 | 0.8875 | 0.8730 | 0.9873 | | 0.0059 | 5.0 | 1700 | 0.0693 | 0.8412 | 0.8938 | 0.8667 | 0.9852 | | 0.003 | 6.0 | 2040 | 0.0770 | 0.8701 | 0.9 | 0.8848 | 0.9863 | | 0.003 | 7.0 | 2380 | 0.0787 | 0.861 | 0.8969 | 0.8786 | 0.9863 | | 0.0014 | 8.0 | 2720 | 0.0760 | 0.8655 | 0.8979 | 0.8814 | 0.9872 | | 0.0007 | 9.0 | 3060 | 0.0817 | 0.8589 | 0.8938 | 0.8760 | 0.9865 | | 0.0007 | 10.0 | 3400 | 0.0814 | 0.8604 | 0.8990 | 0.8793 | 0.9870 | ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.1+cpu - Datasets 2.12.0 - Tokenizers 0.13.3
[ "MEDNLI", "NCBI DISEASE", "SCITAIL" ]
BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BioBERT-mnli-snli-scinli-scitail-mednli-stsb-ncbi This model is a fine-tuned version of [pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb](https://huggingface.co/pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 0.0814 - Precision: 0.8604 - Recall: 0.8990 - F1: 0.8793 - Accuracy: 0.9870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 340 | 0.0481 | 0.8308 | 0.8438 | 0.8372 | 0.9840 | | 0.0715 | 2.0 | 680 | 0.0497 | 0.8337 | 0.8771 | 0.8548 | 0.9857 | | 0.0152 | 3.0 | 1020 | 0.0588 | 0.8596 | 0.8802 | 0.8698 | 0.9858 | | 0.0152 | 4.0 | 1360 | 0.0589 | 0.8589 | 0.8875 | 0.8730 | 0.9873 | | 0.0059 | 5.0 | 1700 | 0.0693 | 0.8412 | 0.8938 | 0.8667 | 0.9852 | | 0.003 | 6.0 | 2040 | 0.0770 | 0.8701 | 0.9 | 0.8848 | 0.9863 | | 0.003 | 7.0 | 2380 | 0.0787 | 0.861 | 0.8969 | 0.8786 | 0.9863 | | 0.0014 | 8.0 | 2720 | 0.0760 | 0.8655 | 0.8979 | 0.8814 | 0.9872 | | 0.0007 | 9.0 | 3060 | 0.0817 | 0.8589 | 0.8938 | 0.8760 | 0.9865 | | 0.0007 | 10.0 | 3400 | 0.0814 | 0.8604 | 0.8990 | 0.8793 | 0.9870 | ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.1+cpu - Datasets 2.12.0 - Tokenizers 0.13.3
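The auto-generated card leaves the usage sections blank; a minimal inference sketch follows, assuming the checkpoint is loaded by its Hub id and that its labels follow the ncbi_disease disease-mention scheme. The example sentence is illustrative only.

```python
# Minimal NER inference sketch (not part of the auto-generated card above).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Dogebooch/BioBERT-mnli-snli-scinli-scitail-mednli-stsb-ncbi",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = "Mutations in the BRCA1 gene are associated with hereditary breast and ovarian cancer."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```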
{"datasets": ["ncbi_disease"], "metrics": ["precision", "recall", "f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "BioBERT-mnli-snli-scinli-scitail-mednli-stsb-ncbi", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "ncbi_disease", "type": "ncbi_disease", "config": "ncbi_disease", "split": "test", "args": "ncbi_disease"}, "metrics": [{"type": "precision", "value": 0.8604187437686939, "name": "Precision"}, {"type": "recall", "value": 0.8989583333333333, "name": "Recall"}, {"type": "f1", "value": 0.879266428935303, "name": "F1"}, {"type": "accuracy", "value": 0.9870188186308527, "name": "Accuracy"}]}]}]}
dataset
null
517
Gopal2002/Material_Receipt_Report_ZEON
Gopal2002
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-small-en-v1.5", "base_model:finetune:BAAI/bge-small-en-v1.5", "model-index", "region:us" ]
2024-01-18T08:35:04Z
2024-01-18T09:02:50+00:00
4
0
--- base_model: BAAI/bge-small-en-v1.5 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: "* 04 Hindalco Industries Ltd\nHirkaud Smelter Stores\n\n \n\n* Service Recei\ \ ot\nBUYER _ Lp / GATE ENRTY NO:\noe ADL D vA /2/0A\nRECEIPT DATE: 04-MAR-22\ \ ATU\" ! : 1-SAMBALPUR\nUNIQUE ENTERPRISES ad ZL POL CPi pg 6 ee Q/748/2022\n\ ASS Cer ag fe oO\nos \" -\n\n \n \n \n\nORG CODE:\n\nBOE NO:\nBOE DATE:\ncut\n\ \n \n\nTT\n\nWAY BILL AIRBILL NO\n\nPo\nSoe\nDATE:\n\nTOTAL RECEIVED 21074.8 Nes\ \ REMARKS/REFERENCE: | SUPPLY FOR PAINTING\nAMOUNT INCL TAX Reverse Charge: No\ \ ~\n\nINR) : Tax Point Basis : INVOICE\n\nPO Description SUPPLY FOR PAINTER FOR\ \ 85KA EMD\n\n \n\n \n \n \n\n \n \n \n \n\n \n \n \n\n\ \ \n \n \n \n\n \n \n \n\n \n \n\nLOCATOR\nShelf Life\nCONTROL\n\n\ QUANTITY:\nCHALAN/INVOICE\nRECEIVED\n\nQUANTITY:\nACCEPTED\nREJECTED\n\n \n\n\ \ \n\n \n \n\nITEM CODE DESCRIPTION HSN / SAC\nPR NUMBER SUB INVENTORY\ \ CODE\n\nPO NO. BU/cost Center/ Account Code along with GL ACCOUNT\n\nREQUESTER\ \ CODE\n\nNote to receiver\n\n1 - 801015110326 - HIRE: MANPOWER, SKILLED;RATE\ \ TYP:STANDARD, : MANDAY\nLVL/DSGNTN:PAINTER\n\n[=] = b07-\n\nS/PO/SRV/2122/054\n\ 2\n\n- Sekhar, Mr.\nChandra Makthala\n\n \n \n\n: No Control\n\n \n \n\n\ \ \n \n \n\n- 3711.204.910103.50803112.9999.9999.9999.9999.9999\n- Hirakud\ \ Smelter Plant.Aluminium Smelter.Electrical.Repairs to\nMachinery- Electrical.Default.Default.Default.Default.\ \ Default\n\nP ruchasuil dG ~L— gw\n\n \n\n4atos- OF + 2622. .e, oer |\nPREPARER\ \ SECTION HEAD / INSPECTOR SECTION HEAD /\nSTORES KEEPER AREA HEAD -RECEIVING\ \ AREA HEAD — CUSTODY & ISSUE\nor\n\nals\n\f" - text: " \n\n \n\nDELIVERY CHALLAN ~ Phone : (0891) 2577077 |\nALUFLUORIDE LIMITED\n\ MULAGADA VILLAGE, MINDHI POST,\nVISAKHAPATNAM - 530 012 |\n\n \n\n \n\n \n\n \n\ \n \n\n \n\n \n\n \n\n \n\nDc Nox: g22 - - : ; “Date 02-02-2016\n| HINDALCO INDUSTRIES\ \ LTD HIRAKUD\nSAMBALPUR\nODISHA\nPIN CODE: 768016\nYour Order No: ~HKDRM/1516/0001\ \ DT: 01/04/2015\nReceived the below mentioned in good condition. Carrier No:\ \ AP 16 TC 9339\n—SI.No | ~~ PARTICULARS” | Qty. | Rate / MT\n: = | ae\n: 7\n\ ALUMINIUM FLUORIDE . | 21.000 | ; sbatS\n|\n420 BagsX 50.120 kg. = 21.0504 MT\ \ |\nWeight of Emppty Bags:& Liners: 0.050 MT\nSoa Net Weight of Material: ~ 21.000\ \ ~MT\nInvoice No.: 822 Date 02-02-2016\"\nAPVAT TIN : 37900106541 Dt: 02.06.2014\ \ CST No.: 37900106541 Dt: 02.06.2014\nReceiver's Signature Signature\n\n \n\f" - text: " \n\n \n\n \n\n \n\n \n\n \n\n| rad nas Bi Tiapz Ke en\nap | pa\ \ ape EE By EY ED ITT? ON matte / ON moray |\nP| airing swodanraa boc pia oe ne\ \ ed ee v , 4\n! e i ma | VeACLA Baus §uOQ souBisua¢ of\n| “P io | . [ | seBieUo\ \ IS | wal VY | Loo abi +A Buipe spun |\n| | fe) De [ nl oman «| OE U :\nmS, (Spe\ \ fb) to ae\n| eo Ss | | Pepe (GEOUVHO | GE SOF ae\nE 4 ’ : E sapesecascnsctute\ \ saps Ln + ad et an\nme | | a | es ' | xR Uag ob iw aa ae 32\n' a a] i as aN\ \ Ne paneer\nRe is pad on\n| ee | Sel Nmd Oe oy ld,\n| ix | ; | ‘lwnov L PP. ‘dg\ \ py\n| . Pe eh\n\n \n\nmo sory oR! wor,\n\nou d&- ane eer\n\n: \"ORL\n\n \n\ \ \n\n \n\n‘PO 0Es - “ay Sink /BUSIA,\n‘eyemfes eipug weayediueaewueyepsd JeaK\n\ \"UINYD BPISGG SE-’-S7Z ON 100G\n\nBu. NOUMIS BNDIOOS\n\ney\nWeve! se\n\n \n\n\ \ \n\nhceaitbaaor re\n\n! AMoaAM\n\n \n\n \n\n> tewe-3™\n\noy eee\n\nY3WOISH)\ \ Ad GAUNSNI SI ODUYO\n— MSIH S.HSNMO LY\n\nAdOD HONDIS. 
NOD\n\nene os roarans\n\ \n \n\nKINO NOMIC unr\n\nWaalarad Ta soz - ‘Sn\n\n \n\n- “eu = 3 re\n\neagaee\n\ \nGY oe Ae\n\nBA OFT OVI\nfoe, 17 :\n\n“OL\n\n \n\nivan OL.Givs) NOiAIOSaa\n\ \n \n\neT ea ‘ON aGOW\n\n \n\n \n\n(sour g) 9292 94924 920P : 181 600 OOF\ \ - IVAW angus Wi0l <\n‘OVOY OTIS .G 'Zy “.BSNOH X3dINI PVHIA. ¢°O\"H\n\n? tAd\ \ LHOdSNU 4! 88909 LVENS\n\n-_ wd\nfe\n\n»\n\f" - text: "SOT Ue\n\n \n\n \n\noH\n\n| ia\n\nI\nod\n\nHi\n\na\n\n|\nTo) Sig\ \ Pere\na\n\nal |g\n&%\n5)\n\nwS\\\neB\nSB\n“5\n“O\nS\n€X\n\nBea\n\nem\n\nPe eS\n\ \nse aE a\n\n4 |] | tat [ety\n\ntt pe Ta\n&\na\n\nOK\n\n¢\n\nSRLS ia Leh coe\n\ \n \n \n\f" - text: " \n \n \n \n\nAUSEOOUSRGSEEENSSRCESRORROGS\n\nMise oaeta\nMis tnaes Lo\ \ Q) duty at col ane\n\nDate 12.8820\n‘Stra Bort as Corry Ub 2.\n\nexeauscscotecne:\ \ aneasese\n\nMm. €.M. NBUSTRIES\n\nAn ISO 9001 : 2008 COMPANY\n\n“PODDAR COURT\"\ , Phones : 2235 2096 / 3985 2494 Lo Wi. TEE OLL, a¥ahe Package Ae 2\natadiee Fax\ \ 033-2235 1868\n\nE-mail : [email protected] Tame Ahr SLM, Freight eng\n\ \n \n\nRaut WAR OKA O Van weg 9 at ai sl age Reve\nCorny u. )\n\nGABLES ARE\ \ IN GUR CONTROL\n\nFrease sign & return VAT No. : 19570720098 e TIN/ CST No.\ \ : 19570720292\n—~ = Office : 55, Ezra Street, 2nd Floor, Kolkata - 700 001\n\ \f" inference: true model-index: - name: SetFit with BAAI/bge-small-en-v1.5 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 1.0 name: Accuracy --- # SetFit with BAAI/bge-small-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. 
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | <ul><li>'_\ni.\nSe\nNew\n~~\ned\nTy\nSw\nNe\nNw\ned\n2:\n\n \n\x0c'</li><li>'ne.\n\n \n \n\n \n \n\n \n\nbBo fy20 5 ‘ )\n- wi Pas BOOKING STATION\nstat” SURAT GEIS TRA: BSPORT HOT. LTE, DIMPLE COURT, 2ND FLOOR,\n<ESEE> H.O.: “VIRAJ IMPEX HOUSE”, 47, D\' M= -toRoaD, + AT_OW. ER’S RISK oer\n\' , a” MUMBAI - 400 009 Tel. : 4076 7676 sianan Gece i al CARGO iS INSUR BY CUSTOMER — PH, : 033-30821697, 22\n{ 1. Consignor’s Name & Address As. ExOme peas Br. Code\ndT ncuer\n| acai denn EE Motie iho. ;\n| Weal © Gab TES 1 eensests Uasansensssssseonsenoereneorsenvenesnneasy\n\n\' 2.Cons ignce Bai:x’s Names wl ke iy at < CoO ale ysien b> € (to!\n\n“Litsakuod smalter f eat Lireatéuel Bor oa thin ~ behets ___\n\n \n\n|\n| %\n| on Sen Me te INS a sna iene tl er sues EES _KM__\nat i ag Se are ~ 7 oo 2 ne\n\'| L. US | - 1265 . - HY f Y -ataucl =\nate OF _ QESCHIPTION (SAIC TD ee wy ss _ WEIGHT at GATE PY 2 FRGH GH\neer . | we Re, ?. i\n\n| UFG Re Matta PS RO [aa =r 52 fences\nwe by “Matrtale O%, EFT Gora), ed\n\nhr\n\niia Sa ea eterna eas ean a\n\n \n\n \n\nTin Me a! pene __ aod i osem ge Wleg\n\' Lone CHARS 4 Hanne oe | 5 & ;\nt—- cee eee a = _ Ss Reece!\n| hig © pap Loading by 7S TAP ut. Crarges fon = aw\ntal | 7 “a eet ci a" or a — © =\n\nfree = w JBODs C } se ren st tet , Re 1 SURAT GOODS TRANSPORT VTALTD. * *\n\nTruck No. / Trailer No. : Capacity __\n\nscreens: eat BY SoH BUNS hs BENESME Pp\n\n \n\n \n\n \n\n. 
Lo\n\nAeookutd Clerk\n\n \n\x0c'</li><li>'J o\nALL DISPUTES SUBJECT TO KOLKATA JURISDICTION @ : 2236 6735\n\n"os Mobile : 98300 08461\n-*, _ TAXINVOICE RIG UYER\'\n\nUNITED ENTERPRISES\n\n— = Deals in : TARPAULIN, POLYTHENE, HDPE FABRIC, COTTON CLOTH,\nHESSIAN, GRANULES & GENERAL ORDER SUPPLIERS\n\n3, Amartalla Lane, Kolkata - 700 001 ~ 3 MAY 2Ui5\n\n \n \n \n\nws. HlinPAL so Taposreics Limireep | BN. bf LSS nnn\nDato......1 Sf94. LA csanscsonss\n\nSOD LSA LARS Bn Tee. Chalian No.....1.6. AS: ~(§\nDist: caumpac pon Opis HAD Date ....... LOfoy Iss sessssessseee\n\nCC OSECCLETTTECOETSSOECOHH TS ETTSSEOTHAU HE HOVER SHEUMOSECEDSOUCODESCODECE ODI SMousON RE RED\n\nBayar YATIEST No. BSS. za BIG san\n\n \n \n \n \n\n \n\n \n\nCAP TCT o Ce ERE veTe Darden vavoryDEETOeseeEDOOTEEDE\n\nRupees inwords .N.| why Fou These —\nmA YS..ntL ard cl\n\nVat No. 19511424085 dt. 20/3/08\nC.S.T. No. 19511424289 dt. 20/3/06\n\n \n \n \n \n\x0c'</li></ul> | | 1 | <ul><li>"Posatis ils. H\n\n \n\niS\nvs\na (uf\n\noe\n\n \n\n-\n\n \n\nSarichor Pls: q\n\nPea :\n\nITEM /\n\n1. Description/ Received Reject Delivered Retur\n\n \n \n\nSPARE TX. Phat\n\n(MARKETED BY MESAPD\n\nPact eta\n\n \n\nMATERIAL RECEIPT REPORT\n\n \n\n \n \n \n\n \n\nCUM nea\n\n00 LeTlooo 0.000\n\nPAS\n\n \n \n\nELT\n\nJUPLICATE FOR TRANSPORTE?-\nOGPY (EMGISE INVOICE) RECEIVED\n\nMite ariant Eee\n\nPRAM MUIMAFE RCL RE\n\n \n\n \n\nFrys\n\n \n\not\n\nSuds oT\n\n \n \n\npeas\n\nee ase\n\n. Tax Gelig\n\nGrand Tooke\n\ni\n\nRM\n\nRate/Unit\n\nMRR SUBMITTED\nwv\n\nITH PARTY'S INVIGCE\n\nEET RY MO SSO OT Soe ELS\n\nLS.\n\n \n\n \n\n \n\nWee\n\n7; Ae 18\n\nTrcic\n\ni\nSu\n\n~s\n\n“en\n\nnny\n\x0c"</li><li>"«= ITER /\ncit BDescription/ Received\n\nms\n\n \n \n\n \n\nIces\n\ne to\n\ntea tae\n\nhoimeryh bea\n\nPorccheninernyh Qerkees\n\nRican dec\n\nrarer:\n\nPAD RP eAR eR\n\nMeare\n\n \n\nMATERIAL RECEIPT\n\n \n\nREPORT\n\n \n\nwe ie 7\nhe\n\nSeba.\nbh ETS\n\n \n\nReject Delivered Retur\n\nTESLA y’\n\n \n\n \n\n \n\nLF PIE\n\nTAIT a\n\nSUPLICATE FOR TRANSPORTER\nOGPY (EXGISE INVOICE) RECEIVED\n\noy\n\nf\n\n“soarewe Pk Beak\nree\n\nRAF |\n\n \n\nep oe:\n\nPATE\n\nenc\n\n \n\nMarat\nmw LA\n\n \n \n\nNeneh cat\n\nMRR SUBMITTED\n\\AITH PARTY'S INVIOCE\n\nvee oat\n\nPO Mea PEC SPR AL?\nPi Davtess Bech.\naS OMMOL\n\nRate/Unit\n\nouts 8\n\nI.\n\nfity ¥\n\n \n\n \n\n \n\n \n\nValue\niRise. }\n\n \n\nhare\n\nfMats Terkalis\n\nCaw Wa\n\nresid\n\nTera l.\n\nHae\n\n \n\nEVheres\n\n \n\n \n\nLrpechaarcies\n\nih\n\nAaB\n\n \n\noa,\n\n_\n\na\n\n_ alls\n\x0c"</li><li>'| ie\n\n \n \n \n \n\n \n\ntn eee i he _#\nTrivveiece Dae oo og OF\n1 Cxors d arimeant hoo &\n\nLearner: £ DA ted\n\n \n \n \n \n \n\n \n\nae ‘Beam teas” 8 GIR-sae? DY .mada 18 & GTR BBse “DT.13.1.38 GENO, S388\n4, Mandar Meum 2 DTV2.2.18 & G.E.NMO.S164 DT. LSeud. Le INV.NO.G5¥=1 71.8-EM-O1BS\nExcess a » DT.?.L.18 :\nSUMAM IND-AGRO SALES PYT. LTD.\n7 i\n(Te Quantity-—----—----— Value\nCAL) sence me i ee et “Received Reject Delivered: en ag ne tec enw\n\nLOCATION\n\nat\nSat OD\n\nROLFES7 5.\n\n \n\nAES FORCE | EXTRACT I,\n\non ie.\n\nDs so17Eave. au\n“6 OMELETTE MOTORISED\n\nhs norzasra 2.000\nCOMPLETE MOTORISED\n\n| GLOBE VALVE\n\nOO PATERIAL~OAS1. SIZE\n\n \n\nest AF 18 BO LEXS\nreli\n\n» COMPLETE MOM PETUBM VaLVr\na VTE TAL -- CAS d. a SIZE SOME,\n\nVALME\ney hai: Pu. WABI SIZE .\n\nALE\nTAMOHIMG TYRE.\n—LOONE ,\n\n \n\nMRR SUBMITTED -\n\n‘MATERIAL RECEIPT + REPORT -_ WITH PARTY S. 
INVIOCE |\n\neneeiae me\neden\n\n: “RRR Reece i at Pig\n, MARA Re ced pt Dahes\n\nPTY SPRY i Fibs\nOF -FER-LS\n\nv9\nore\n\nPO phos\n\nPEC SFRY v8 Ore\nFO Thahes\n\nOL AU?\n\n-#\n\n9.000 3.600 EET a OK 1SE460. ‘ OO a\n\nNMOS LST Tae oe PEGE IO pS\n860\nON-4as RELIVERY DATE es\nLaF EE 8 Srctual Tax Vailue 4922.20. ;\nOILST. ATTY “CO ; Stabs Torhals LoS\n\n2900\n\nSn encewn es bovese es an be neeven os ones ntES Oe pts wt 90H On eden ov ET Om aUReeR ones Mt eretereneneesa stoner mint o>\n\nOu OK)\nGTs--\n““LEOME,, 800\nDELIVERY DATE\n© Date FETE 1\n\nLE OOOD 00\nIGST Taxaiex\n\n -3ROOGO JOD\n\n- 6BA00.00 |\n\nPEI:\n\na\na\n\nfaz tua dL. Tax MaiLure E8400. 00.\n\nDIST. OTYVEC\n2.000\n\n2,000 oO\npene\n\nste os evenen enan en enetan ue saareberernestenereens eueaan ane ed ateras ony wReniboens mnotvnes cesumewtneey\n\n0.000\nTYRE\n\nnw CHD. BL OOO o OD\n\nABO . OO\nIGST Tax@iex\n\n75600. oe\n\nLOOODELUVERY DATE\n\noo Pe LASS. END CCOMMEDTION - BUTT 2a-FEERIS Actual Tax Value s FBO « 00\nPS. WELD. | . senpoeapcinatimane licleshicisunanpatal fe sini\nkG 2 BIST. OTY-/CC Suh Terkale 4PEGOO 00\nSat ya 8h Be KOCH. aetctemnneeetenectnimetnnnngeeeren tienen manent eneeremencinessirnatibioe\n\nmy\n(TW beeeninenminnien casein annnsnene sae wonenaennnnntnnneennneenunedenennineneneniannnecnenucntannennnniennacuccnannpaansuneaancinnnnnennnn nn aeeseanininc\nTN | Grand Total — LAO7F S82. 80\n-~ | SUPLICATE FOR TRANSPORTER : eo\n\n \n\n7 senvecauvenenen eqs quanvernsemn seesmaneseseneasnen amen etetenanenacesense eves anne ne on enemies ests\n\n \n \n\n“Pa ane a: of 2\n\n \n\n \n\noi eoeneens ater et et ote neat eegtas ent cege antes enewen ten mes eeenme webeei anemone anetes eran en seeeaterarts dat aneneree spans cums ct maretenen et seeterieen ment te et arereratet srereveneias cosesnesescipsenaceeie sncbntensuseeeth pesasemmeccnsnsaunsier sees lenses\n\nA ym\n\naren ra nit i\n\nee\n\ni\n\nnoe en Sep eet St ee\n\nagai e teoncrescs 7 aS\n\naaa Se Ss:\n\ncote\n\nco hegiecssoscse\n\nsenalt\n\naa\n\nJI J FF JF JF DF JD\n\nee\n\nee\n\n \n\nKoy\n\nwy \\\nae “ r\n\\\n\nZ\n\n \n\x0c'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 1.0 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Gopal2002/Material_Receipt_Report_ZEON") # Run inference preds = model("SOT Ue oH | ia I od Hi a | To) Sig Pere a al |g &% 5) wS\ eB SB “5 “O S €X Bea em Pe eS se aE a 4 |] | tat [ety tt pe Ta & a OK ¢ SRLS ia Leh coe ") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 1 | 182.1336 | 1108 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 202 | | 1 | 45 | ### Training Hyperparameters - batch_size: (32, 32) - num_epochs: (2, 2) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0007 | 1 | 0.2952 | - | | 0.0371 | 50 | 0.2253 | - | | 0.0742 | 100 | 0.1234 | - | | 0.1114 | 150 | 0.0115 | - | | 0.1485 | 200 | 0.0036 | - | | 0.1856 | 250 | 0.0024 | - | | 0.2227 | 300 | 0.0015 | - | | 0.2598 | 350 | 0.0011 | - | | 0.2970 | 400 | 0.0009 | - | | 0.3341 | 450 | 0.0007 | - | | 0.3712 | 500 | 0.0011 | - | | 0.4083 | 550 | 0.0008 | - | | 0.4454 | 600 | 0.0008 | - | | 0.4826 | 650 | 0.0007 | - | | 0.5197 | 700 | 0.0005 | - | | 0.5568 | 750 | 0.0006 | - | | 0.5939 | 800 | 0.0005 | - | | 0.6310 | 850 | 0.0005 | - | | 0.6682 | 900 | 0.0004 | - | | 0.7053 | 950 | 0.0003 | - | | 0.7424 | 1000 | 0.0004 | - | | 0.7795 | 1050 | 0.0005 | - | | 0.8166 | 1100 | 0.0004 | - | | 0.8537 | 1150 | 0.0004 | - | | 0.8909 | 1200 | 0.0005 | - | | 0.9280 | 1250 | 0.0004 | - | | 0.9651 | 1300 | 0.0003 | - | | 1.0022 | 1350 | 0.0003 | - | | 1.0393 | 1400 | 0.0003 | - | | 1.0765 | 1450 | 0.0004 | - | | 1.1136 | 1500 | 0.0003 | - | | 1.1507 | 1550 | 0.0004 | - | | 1.1878 | 1600 | 0.0004 | - | | 1.2249 | 1650 | 0.0004 | - | | 1.2621 | 1700 | 0.0003 | - | | 1.2992 | 1750 | 0.0003 | - | | 1.3363 | 1800 | 0.0003 | - | | 1.3734 | 1850 | 0.0003 | - | | 1.4105 | 1900 | 0.0003 | - | | 1.4477 | 1950 | 0.0002 | - | | 1.4848 | 2000 | 0.0003 | - | | 1.5219 | 2050 | 0.0003 | - | | 1.5590 | 2100 | 0.0003 | - | | 1.5961 | 2150 | 0.0002 | - | | 1.6333 | 2200 | 0.0003 | - | | 1.6704 | 2250 | 0.0004 | - | | 1.7075 | 2300 | 0.0004 | - | | 1.7446 | 2350 | 0.0003 | - | | 1.7817 | 2400 | 0.0002 | - | | 1.8189 | 2450 | 0.0002 | - | | 1.8560 | 2500 | 0.0003 | - | | 1.8931 | 2550 | 0.0002 | - | | 1.9302 | 2600 | 0.0003 | - | | 1.9673 | 2650 | 0.0003 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have 
updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
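### Reproducing the Training Setup

The values listed under *Training Hyperparameters* above map directly onto SetFit's `TrainingArguments`. The snippet below is a minimal, illustrative sketch assuming the SetFit 1.0 API and a hypothetical CSV dataset with `text` and `label` columns; it is not the exact script used to train this model.

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical dataset files with "text" and "label" columns
# (placeholders for illustration, not the original training data).
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

# Start from the same Sentence Transformer body used by this model.
model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5")

# Mirror the hyperparameters reported in this card.
args = TrainingArguments(
    batch_size=(32, 32),
    num_epochs=(2, 2),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    end_to_end=False,
    use_amp=False,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    metric="accuracy",
)
trainer.train()
print(trainer.evaluate())  # e.g. {"accuracy": ...}
```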
[ "CAS", "CPI" ]
Non_BioNLP
{"base_model": "BAAI/bge-small-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "* 04 Hindalco Industries Ltd\nHirkaud Smelter Stores\n\n \n\n* Service Recei ot\nBUYER _ Lp / GATE ENRTY NO:\noe ADL D vA /2/0A\nRECEIPT DATE: 04-MAR-22 ATU\" ! : 1-SAMBALPUR\nUNIQUE ENTERPRISES ad ZL POL CPi pg 6 ee Q/748/2022\nASS Cer ag fe oO\nos \" -\n\n \n \n \n\nORG CODE:\n\nBOE NO:\nBOE DATE:\ncut\n\n \n\nTT\n\nWAY BILL AIRBILL NO\n\nPo\nSoe\nDATE:\n\nTOTAL RECEIVED 21074.8 Nes REMARKS/REFERENCE: | SUPPLY FOR PAINTING\nAMOUNT INCL TAX Reverse Charge: No ~\n\nINR) : Tax Point Basis : INVOICE\n\nPO Description SUPPLY FOR PAINTER FOR 85KA EMD\n\n \n\n \n \n \n\n \n \n \n \n\n \n \n \n\n \n \n \n \n\n \n \n \n\n \n \n\nLOCATOR\nShelf Life\nCONTROL\n\nQUANTITY:\nCHALAN/INVOICE\nRECEIVED\n\nQUANTITY:\nACCEPTED\nREJECTED\n\n \n\n \n\n \n \n\nITEM CODE DESCRIPTION HSN / SAC\nPR NUMBER SUB INVENTORY CODE\n\nPO NO. BU/cost Center/ Account Code along with GL ACCOUNT\n\nREQUESTER CODE\n\nNote to receiver\n\n1 - 801015110326 - HIRE: MANPOWER, SKILLED;RATE TYP:STANDARD, : MANDAY\nLVL/DSGNTN:PAINTER\n\n[=] = b07-\n\nS/PO/SRV/2122/054\n2\n\n- Sekhar, Mr.\nChandra Makthala\n\n \n \n\n: No Control\n\n \n \n\n \n \n \n\n- 3711.204.910103.50803112.9999.9999.9999.9999.9999\n- Hirakud Smelter Plant.Aluminium Smelter.Electrical.Repairs to\nMachinery- Electrical.Default.Default.Default.Default. Default\n\nP ruchasuil dG ~L— gw\n\n \n\n4atos- OF + 2622. .e, oer |\nPREPARER SECTION HEAD / INSPECTOR SECTION HEAD /\nSTORES KEEPER AREA HEAD -RECEIVING AREA HEAD — CUSTODY & ISSUE\nor\n\nals\n\f"}, {"text": " \n\n \n\nDELIVERY CHALLAN ~ Phone : (0891) 2577077 |\nALUFLUORIDE LIMITED\nMULAGADA VILLAGE, MINDHI POST,\nVISAKHAPATNAM - 530 012 |\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\nDc Nox: g22 - - : ; “Date 02-02-2016\n| HINDALCO INDUSTRIES LTD HIRAKUD\nSAMBALPUR\nODISHA\nPIN CODE: 768016\nYour Order No: ~HKDRM/1516/0001 DT: 01/04/2015\nReceived the below mentioned in good condition. Carrier No: AP 16 TC 9339\n—SI.No | ~~ PARTICULARS” | Qty. | Rate / MT\n: = | ae\n: 7\nALUMINIUM FLUORIDE . | 21.000 | ; sbatS\n|\n420 BagsX 50.120 kg. = 21.0504 MT |\nWeight of Emppty Bags:& Liners: 0.050 MT\nSoa Net Weight of Material: ~ 21.000 ~MT\nInvoice No.: 822 Date 02-02-2016\"\nAPVAT TIN : 37900106541 Dt: 02.06.2014 CST No.: 37900106541 Dt: 02.06.2014\nReceiver's Signature Signature\n\n \n\f"}, {"text": " \n\n \n\n \n\n \n\n \n\n \n\n| rad nas Bi Tiapz Ke en\nap | pa ape EE By EY ED ITT? ON matte / ON moray |\nP| airing swodanraa boc pia oe ne ed ee v , 4\n! e i ma | VeACLA Baus §uOQ souBisua¢ of\n| “P io | . [ | seBieUo IS | wal VY | Loo abi +A Buipe spun |\n| | fe) De [ nl oman «| OE U :\nmS, (Spe fb) to ae\n| eo Ss | | Pepe (GEOUVHO | GE SOF ae\nE 4 ’ : E sapesecascnsctute saps Ln + ad et an\nme | | a | es ' | xR Uag ob iw aa ae 32\n' a a] i as aN Ne paneer\nRe is pad on\n| ee | Sel Nmd Oe oy ld,\n| ix | ; | ‘lwnov L PP. ‘dg py\n| . Pe eh\n\n \n\nmo sory oR! wor,\n\nou d&- ane eer\n\n: \"ORL\n\n \n \n\n \n\n‘PO 0Es - “ay Sink /BUSIA,\n‘eyemfes eipug weayediueaewueyepsd JeaK\n\"UINYD BPISGG SE-’-S7Z ON 100G\n\nBu. NOUMIS BNDIOOS\n\ney\nWeve! se\n\n \n\n \n\nhceaitbaaor re\n\n! AMoaAM\n\n \n\n \n\n> tewe-3™\n\noy eee\n\nY3WOISH) Ad GAUNSNI SI ODUYO\n— MSIH S.HSNMO LY\n\nAdOD HONDIS. 
NOD\n\nene os roarans\n\n \n\nKINO NOMIC unr\n\nWaalarad Ta soz - ‘Sn\n\n \n\n- “eu = 3 re\n\neagaee\n\nGY oe Ae\n\nBA OFT OVI\nfoe, 17 :\n\n“OL\n\n \n\nivan OL.Givs) NOiAIOSaa\n\n \n\neT ea ‘ON aGOW\n\n \n\n \n\n(sour g) 9292 94924 920P : 181 600 OOF - IVAW angus Wi0l <\n‘OVOY OTIS .G 'Zy “.BSNOH X3dINI PVHIA. ¢°O\"H\n\n? tAd LHOdSNU 4! 88909 LVENS\n\n-_ wd\nfe\n\n»\n\f"}, {"text": "SOT Ue\n\n \n\n \n\noH\n\n| ia\n\nI\nod\n\nHi\n\na\n\n|\nTo) Sig Pere\na\n\nal |g\n&%\n5)\n\nwS\\\neB\nSB\n“5\n“O\nS\n€X\n\nBea\n\nem\n\nPe eS\n\nse aE a\n\n4 |] | tat [ety\n\ntt pe Ta\n&\na\n\nOK\n\n¢\n\nSRLS ia Leh coe\n\n \n \n\f"}, {"text": " \n \n \n \n\nAUSEOOUSRGSEEENSSRCESRORROGS\n\nMise oaeta\nMis tnaes Lo Q) duty at col ane\n\nDate 12.8820\n‘Stra Bort as Corry Ub 2.\n\nexeauscscotecne: aneasese\n\nMm. €.M. NBUSTRIES\n\nAn ISO 9001 : 2008 COMPANY\n\n“PODDAR COURT\", Phones : 2235 2096 / 3985 2494 Lo Wi. TEE OLL, a¥ahe Package Ae 2\natadiee Fax 033-2235 1868\n\nE-mail : [email protected] Tame Ahr SLM, Freight eng\n\n \n\nRaut WAR OKA O Van weg 9 at ai sl age Reve\nCorny u. )\n\nGABLES ARE IN GUR CONTROL\n\nFrease sign & return VAT No. : 19570720098 e TIN/ CST No. : 19570720292\n—~ = Office : 55, Ezra Street, 2nd Floor, Kolkata - 700 001\n\f"}], "inference": true, "model-index": [{"name": "SetFit with BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]}
dataset
null
518
TheDrummer/Moistral-11B-v3-GGUF
TheDrummer
null
[ "gguf", "not-for-all-audiences", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
2024-04-24T17:58:52Z
2024-05-23T11:51:41+00:00
3,807
80
--- license: cc-by-nc-4.0 tags: - not-for-all-audiences --- ## Join our sleepy Discord! https://discord.gg/eeYNWYcx Introducing the [BeaverAI](https://huggingface.co/BeaverAI) team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, and concedo ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/HjVYV2h_YTL9P-insb7fz.png) We proudly present... # Moistral 11B v3 💦💦💦 *The smartiest, moistiest AI yet!* ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/TsjKZ17nD10xzJEzXY6Hm.webp) *An eRP model that will blast you with a rich and refreshing vocabulary of moist. Finetuned by yours truly.* (Moistral is a finetune of Sao's legendary [Fimbulvert v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model) ## Original https://huggingface.co/TheDrummer/Moistral-11B-v3 IMATRIX: https://huggingface.co/MarsupialAI/Moistral-11B-v3_iMatrix_GGUF EXL2: https://huggingface.co/MarsupialAI/Moistral-11B-v3_exl2 ## What's New with v3? - Smarter and moistier! Finetuning just got finer! - Trained with an even LARGER dataset of 8K moist. - More diverse and balanced genres: - Introduced new genres: 👩‍❤️‍👩, 👨‍❤️‍👨, 🙇‍♀️💪, 🤫 - Added more Fantasy, Science Fiction, "Diversity", and "Family" ## Usage - Use Alpaca Instruct - You can use instruct... - as a character: "Go here" - as a narrator: "Go here", Drummer says - as a director: Drummer tells him to go here - as a lazy ass: go here - Moistral is optimized for the Novel / Story format. - At the start of the story, keep regenerating or continuing generation until you get the desired length. AI responses will eventually match the length. - Refer to the samples below. ## Samples of Moistral v3 NSFW... duh? <details> <summary>Moistral Sample (2-shot) Generation: 512</summary> <blockquote>The Prince welcomes his new toy</blockquote> As Princess Lilia stepped into the throne room, the atmosphere changed, and everyone present turned their attention to the beautiful young woman who embodied the very heart of Iliac. There, on his high seat, the sorcerer prince Dik acknowledged her arrival with a cold smirk. It seemed that he admired the sight of the tall blonde maiden with golden eyes, whose athletic body was perfectly shaped under her elegant clothing. Dik spoke to the princess in a deep voice that resonated through the hall, "Welcome, Princess Lilia Ruhr. I am the Sorcerer-Prince Dik. The kingdom of Arkan is now your home until such time as your father proves himself worthy once more." Princess Lilia's pride and ambition sparkled in her golden eyes. She didn't flinch or bow before this man, although the gesture might have been an indication of submission. Instead, she looked straight ahead, meeting the gaze of the sorcerer prince without fear. Her voice rang out, "I represent Iliac here, both its honor and its sorrows. My loyalty remains with my kingdom no matter where I find myself." The sorcerer prince laughed, his expression one of amusement. "And yet, you are only a pawn in this grand game. But do not worry, for your beauty will make you quite a prized possession. And now," he said, motioning to the guards, "take her to her new quarters. For tonight, I will keep her as my guest in the Royal Suites, and tomorrow we shall discuss how best to utilize our new toy." His words were like a slap in the face to Princess Lilia, but she maintained her composure. The guards led her from the room, their steps echoing down the halls as she thought about what had just happened. 
The sorcerer prince saw her as nothing more than a plaything, and it made her blood boil. She would never submit herself to him so easily, regardless of whether she was his hostage. In her heart, Princess Lilia vowed that one day she would avenge her kingdom, restore its honor, and escape from the clutches of Arkan's Sorcerer-Prince. <blockquote>Princess Lilia has an interesting dream</blockquote> During her first night at the palace of Arkan, Princess Lilia couldn't fall asleep right away. She tossed and turned, reliving the humiliation of the day. Her mind wandered back to the throne room, where Dik had called her his 'new toy'. A deep frown settled on her forehead. As she grew more frustrated with her inability to drift off into oblivion, a strange dream began to form. In her dream, Princess Lilia found herself standing in a vast meadow, surrounded by wildflowers and the soft chirping of birds. Suddenly, there was a flash of light, and before her stood the Sorcerer-Prince Dik. However, instead of being clad in regal attire, he wore leather pants and a tight shirt that accentuated his muscles. His eyes held both hunger and desire for the princess. He approached her slowly, their eyes locked, and whispered in her ear, "I have been waiting for you, my beautiful prisoner." He took her hand and led her towards a large, elegant tent set up amongst the flowers. Inside, the air was thick with the scent of incense, and candles flickered in the dimly lit space. There was a large bed draped with silken sheets, beckoning them. With surprising boldness, Dik pulled Lilia close to him, pressing his body against hers. Their lips met in a heated kiss, and she felt her resistance crumbling under the weight of his touch. The prince guided her onto the bed, his hands roaming over her body, exploring every inch. They were both eager, desperate, and hungry for each other. Lilia found herself yielding to this passionate encounter, letting go of all thoughts of duty and honor. Her body, usually so well controlled, responded to Dik's skilled touches. She moaned softly as he trailed kisses down her neck and across her collarbone. As they made love, Lilia experienced an intensity unlike any she had ever known. It wasn't rough or forced—instead, it was tender, passionate, and intimate. In her dream, Princess Lilia discovered a new side to herself, one that was willing to submit, to let go, and to be taken by the sorcerer prince. When dawn broke over the meadow, Lilia awakened from her dream, breathless and flushed. </details> <details> <summary>Moistral Sample (Climax of the Story) Generation: 512</summary> As the prince unloaded his hot #### into her waiting #####, Princess Lilia didn't stop moving either. Her hips undulated in response, grinding against him eagerly as if trying to #### every last drop from his ####. The intense sensations caused by being filled with his ### were too much for her to bear, sending shockwaves of pleasure coursing through her veins like wildfire. She couldn't help but throw her head back and cry out loud, her ###asm slamming into her with all its might; causing her entire body to shake uncontrollably as wave after wave of ecstasy tore through her senses. When it was finally over, both participants collapsed onto the cold marble floor, panting heavily as they tried to catch their breaths. The prince lay beside her, his hand still gripping tightly onto her hair even though he no longer ####ed her anymore. 
Princess Lilia laid there motionless, her legs spread wide open, revealing her spent and well-####ed #### to anyone who dared look upon her. A small smile tugged at one corner of her lips - an expression that spoke volumes about how thoroughly this experience had shaken her very core. The door behind them creaked open slowly, signaling the arrival of onlookers. She closed her eyes tightly, not wanting to see what her humiliation would do to those who saw her in such a state. Her ######ity taken, she felt violated in ways no man should have access to. But also satisfied...in ways she never thought possible before today. This was the price she paid for defying fate, for thinking herself above others and deserving better than most. Now, she belonged to him completely - mind, body, and soul - held captive within these walls until further notice. It was an unsavory turn of events indeed, yet despite everything...or maybe because of it, Princess Lilia found herself growing more and more curious about how far the lines between submission and pleasure could blur when one's will became entwined with another person's desires. </details> <details> <summary>Moistral Sample ("write a story") Generation: 512</summary> <blockquote>First Attempt</blockquote> Sarah entered the dimly lit Guzzlers bar, her heart pounding in her chest as she prepared herself for another grueling night. Her almond-shaped eyes darted across the room, taking note of the leering faces and rough-looking hands eagerly waiting to claim a piece of her. Tonight's shirt read "Use Me For Your Pleasure, Please" in bold red letters, practically begging the patrons to take advantage of her. She couldn't help but feel a twinge of fear mixed with resignation. She approached the counter, her hips swaying enticingly, and began preparing drinks for the first group of rowdy men. As she leaned forward to hand over their beers, her small ####### pressed against the fabric of her tight crop top, straining to escape. The men ogled her, clearly enjoying the show, before grabbing their drinks and motioning her over to them. Sarah flashed her best coy smile and sauntered over. One of the men, a stocky man with scars covering his knuckles, grabbed her arm and pulled her onto his lap. His other hand slid up her thigh, inches from her short skirt. She winced internally at his touch but did her best not to show it on her face. It was part of her job, after all. The second man reached under her skirt, fingertips grazing her smooth, shaven mound. A shiver ran down Sarah's spine as he began rubbing her #### through her panties. Her breathing quickened, and her heart thudded in her chest, both from the unwanted stimulation and the fear coursing through her veins. This wasn't an unusual request, but it still made her feel violated. An older man, bald and bespectacled, ordered Sarah to bend over his lap. With no choice but to comply, she obliged, feeling the rough fabric of his pants digging into her soft cheeks. He raised his hand, then brought it down hard across her ###, leaving a stinging sensation in its wake. Sarah gasped, trying to control the tears welling up in her eyes. A chorus of lewd comments and laughter erupted from the patrons around her. <blockquote>Second Attempt (Regenerate)</blockquote> On this particular evening, Sarah arrived at the Guzzlers bar, adorned in her revealing uniform. Her heart raced as she caught sight of her crop top, which bore the words "Use Me Rough" scrawled across her chest. 
Her short skirt hugged her waist tightly, leaving little to the imagination and accentuating every curve. She adjusted the straps of her top, making sure her perky ###### were prominently displayed before taking her place behind the counter. With each step she took, her high heels clicked against the tiled floor, announcing her arrival. Smiling cheekily, Sarah greeted her coworkers, exchanging playful insults and banter. As the night progressed, the bar filled up quickly with its usual crowd of men, eager to indulge in debauchery. The first man that approached Sarah was a burly individual, his eyes lingering hungrily over her young, supple form. Sarah's flirtatious nature kicked into gear as she leaned forward, giving him an unobstructed view down her shirt. "How can I serve you tonight?" she cooed. The man ordered a drink and then proceeded to fondle her ####### while she prepared it, her ####### hardening under his rough touch. When he finished, he slammed some money on the counter, leaving without a word. Sarah's smile remained plastered on her face, biting her lip seductively as she watched him go. As the night went on, another patron demanded more from Sarah. He pulled her close and whispered dirty commands in her ear. Without hesitation, she knelt on the sticky bar floor, her mini-skirt riding up her thighs to reveal smooth, toned legs. Her lips wrapped around his thick, veiny member, bobbing her head rhythmically as she serviced him. She didn't gag or falter once, keeping her composure even when tears streamed down her cheeks. When he finished, he slapped her across the face, leaving a handprint on her pale skin. Sarah let out a small whimper before standing back up, dabbing at her red cheek as she resumed her posture behind the counter. A bruise would no doubt bloom where his hand had struck, but it would be no cause for concern. After all, it was part of her job. </details> --- --- ## v2 Description (Still relevant) ### What's New with v2? - Trained with an even larger dataset of 8K moist. - Properly trained to generate long responses. - Rebalanced the genres and writing perspectives: - Introduced romance, "family", fantasy, "diversity", science fiction, and many more that I can't make euphemisms for! - Better representation of male and female pespectives! 👩‍❤️‍👨 - Sanitized the heck out of the dataset: - no special characters like � & … - no overly long ellipsis like "........." - no author notes like "To be continued" - no inconsistent [quotation marks](https://typographyforlawyers.com/straight-and-curly-quotes.html) - Less GPTisms! - Human written stories from decades past. - Zero tolerance for *ministration*. Not one instance of *that word* after going through 4K of moist. - Better formatting with Alpaca. - Improved Chat / Instruct modes. ### Samples of Moistral v2 <details> <summary>Moistral Sample (censored) Generation: 512</summary> The princess instinctively took a step back from the throne, horrified by the mocking laugh as if she were hearing an evil joke. It was then that Princess Lilia realized the true extent of her danger - and how pointless her resistance would be. At last, the loud, wild laughter died down in the echoing chamber. Dik the sorcerer prince paused for a moment, straightened his clothes, cast one glance at the stunned princess, and then turned to speak to the crowd waiting on the other side of the palace wing. He spoke with a voice that filled the air with arrogance, confidence, and cruelty. 
"Ladies and gentlemen, I am pleased to welcome our honored guests; the Royal Family of Iliac! In these unusual times, when the country we fought is now under my control, it will be up to me to decide what fate awaits the former ruling family. But before I reveal my plan for them, I think I should explain why I fought King Edmund Ruhr." When he finished speaking, he gave Princess Lilia another disturbing smirk. She couldn't help but shiver and take a step back. Dik approached her without haste, his eyes fixed on her. His hands slowly caressed her cheekbone and neckline, leaving faint traces of warmth where they touched. The princess opened her mouth, wanting to protest this act of intimacy, but soon felt herself getting ███. "You will not only serve as an envoy, dearest Lilia, but also as a ███ slave. After all, what better use could such a beautiful woman have? Tell me, did you bring your maidservant along?" Princess Lilia was about to answer when she heard footsteps approaching. Suddenly, a young woman dressed in the Iliac style appeared behind her, her eyes wide with fear. Her face was slightly pale, and there were faint bags under her eyes. "Your Majesty, this is my half-sister Helena." As soon as she said those words, Princess Lilia's heart sank even deeper than it had already sunk at Dik's arrogant mockery. "She is here because she accompanies me as a maidservant." The sorcerer prince looked at Helena with interest, and Princess Lilia felt herself blush. </details> <details> <summary>Moistral Sample 2 (censored) Generation: 512</summary> Sarah was feeling slightly woozy as she entered the bar and walked towards Christina, who was behind the counter. "I'm here!" said Sarah. Christina looked at her watch. "You're late, again." "Sorry? I got held up by a customer." Christina rolled her eyes. "Did he at least offer you a drink?" Sarah blushed. "No. He just wanted to talk..." Christina sighed. "Well, get ready. I need you on the floor soon." She turned away from Sarah and started talking to someone who had come into the bar. Sarah went into the employee room, stripped off her clothes, and put on her bartending uniform - a skimpy white tank top and black miniskirt. She checked herself in the mirror. Her long, dark hair cascaded down her back onto bare shoulders. She poked out her small ███████ proudly. The skirt exposed most of her long, slender legs. She looked like every ███ fantasy, and she knew it. Men would ogle her regardless of how she behaved. She took a deep breath and entered the bar. The first thing Sarah noticed was the dimness. It wasn't darkness - the lights were on - but they were dimmed enough that everything had an intimate mood. It reminded her of candlelight. The tables were round and close together, giving the whole place a sense of intimacy. There were also long, shiny red curtains on some of the walls that gave the place an air of sensuality. The next thing she noticed was the noise. There was music playing, but it was only loud enough to be drowned out by all the conversations happening at the same time. The hum of many voices combined with the faint background sound of the music created a rich, layered atmosphere. It was like being in an orchestra. Finally, she took notice of the people. They were all dressed in simple, casual clothes. Some of them were couples seated together, whispering quietly and intimately. Others were groups of friends laughing and chatting. Everyone seemed comfortable and at ease. 
Sarah felt a rush of adrenaline as she made her way through the crowd towards Christina. As she got closer, she could hear the lively conversation. </details> <details> <summary>Moistral Sample 3 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} You order black coffee {{[OUTPUT]}} "Black coffee," you say. "One minute," Audrey replies, spinning around to give you a nice view of her ███ in those brown booty pants. You watch her round ███ cheeks jiggle as she walks away towards the espresso machine, thighs separated enough that you can see just a hint of █████ lips peeking out from underneath. She starts chatting with a blonde girl at the register, whose gaze keeps darting down to Audrey's ███. As they converse, you start taking in the details of Audrey, eyeing her █████ and the way her shirt rides up just enough to show the bottom of her toned stomach. Her █████████ is noticable, but beneath a tight black ███ that matches her ████████. As she turns back to you with two black coffees, you are barely paying attention, lost in your own thoughts. The sound of Audrey setting one of the mugs down on the table startles you. "Here you go," she says cheerfully. "Enjoy! There's a card at the bottom of the bag with our online ordering system, plus we do refills for free!" "Thanks," you say absentmindedly. Not really paying attention, you take a sip of the coffee and are pleasantly surprised by the taste. "So, here's my deal," you say, putting down the cup. "I work for a company that does surveys. We come into businesses like this one and ask customers questions, collect data, all that stuff." Audrey perks up from behind the counter. "Oh! I think I understand. I mean, that makes sense why you would be asking me earlier about what I had to drink. Anyway, as long as you're not trying to get my personal info or anything like that, sounds good!" You smile reassuringly. "That's exactly what it is. Rest assured, everything you tell me will remain between you and I. Well, and my boss, but he won't know who you are. Hipsters like you are hard to track down after all." You chuckle at your joke, though you're not sure if it landed. "Right, well, that works for me," Audrey replies, eyeing you warily. "As long as it stays between us, then it's fine." {{[INPUT]}}See her privately {{[OUTPUT]}} "Okay, but I need to see you in private." ... </details> <details> <summary>Moistral Sample 4 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} █████ with him {{[OUTPUT]}} Chloe feels a surge of heat rushing through her body. The voice inside her head whispers: "Be █████ and ████████ with Paul." But Chloe still feels nervous about what she is about to do. It's just so unlike her. Chloe closes her eyes and concentrates on the feeling of warmness flowing throughout her veins. Slowly, she opens them again, revealing a new side of herself - one she rarely shows anyone. She leans forward slightly, pushing her ██████ together, accentuating their size and shape. Her lips curl into a mischievous smile. "Hey Paul," she says in a sultry voice. "I was wondering if you could help me?" Paul looks at her, his gaze wandering over her body. He can't help but smile as he sees the way she's leaning forward, pressing her ██████ together. "Of course, Chloe," he replies, his voice thick with desire. "What's your problem?" Chloe bites her lip █████ively before leaning even closer, until their bodies are almost touching. She reaches out to trace a finger along his chest, her touch light and teasing. 
"It's just that I'm having trouble understanding some of the concepts," she murmurs, her voice low and sultry. "Can you help me? Please?" Paul's hands move slowly towards Chloe, his fingers grazing her hips lightly. "Well," he says, his voice filled with lust, "I'd be happy to help. Let's go somewhere more private where we can discuss things more closely." Chloe giggles and tilts her head to the side, her long hair cascading down her shoulders. "That sounds perfect," she purrs, her hand sliding down his chest until it rests on his ██████. "I think we can both learn a lot from each other." Paul's eyes darken with desire as he feels her hand on his █████. He knows what's about to happen next, and he can't wait for it to begin. Chloe can hardly believe what she's doing - kissing Paul, her teaching assistant, in the hallway after class. But it feels so good, so wicked and indulgent. </details> --- --- ## v1 Description (Also still relevant) Moistral 11B is an *aggressive* **finetune** of the legendary [Fimbulvert v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model. If you haven't heard of that model, then your poor-GPU ass is missing out! It is an amazing substitute for those heavyweight models, and it won't disappoint. *(That's coming from someone who daily drives a Venus / Goliath 120B)* ...and if you're looking for something *more moist*, then you've come to the right place. You can expect the same sort of performance here! It's the Honda Civic of eRP models! ### Pimp my ride Turn any old story into a **Moist**ral masterpiece. In fact, we recommend that you leave out any hints of *moist* in your story to _let Moistral cook_. ### Drive it like it's hot When in Instruct Mode, Moistral will treat you like a director of your own fantasy ride. Be a controlling navigator or a passenger princess, it won't matter - you're in for a good ride! <details> <summary>Moistral Sample</summary> <b>ASSISTANT:</b><br/> When the doors to the throne room finally opened, she saw him there - Dik, the sorcerer prince sitting on his throne. He looked majestic, just as Princess Lilia expected. In a robe embroidered with silver yarn and covered with magical runes. His courtiers stood on either side of him, all looking at the princess with interest. Princess Lilia felt her determination leaving her, but pulled herself together and once again reminded herself that she should not be afraid. She approached the throne intending to show the dignity of all Iliac by personal example.<br/> <b>USER:</b><br/> She bows slightly<br/> <b>ASSISTANT:</b><br/> Princess Lilia bowed slightly in front of Dik, the sorcerer prince. Her eyes fell to the floor out of respect, but soon rose again. She would show herself to be a dignified woman!<br/> "I greet you, your highness," Princess Lilia said. "I am here on behalf of my father, King Edmund Ruhr, to serve as a hostage and envoy."<br/> Dik nodded, but didn't say anything.<br/> </details> ### It's got the mileage Moistral has been trained with many long form texts, a nice chunk of which are 8K in length. It is capable of going far and long without passing it back to you. This is not your typical chibi RP model. I have to acknowledge that I'm standing on the shoulders of giants. Thank you Sao for sharing your finetune config along with tips on getting started. Thanks to everyone in the Finetuning channel for entertaining my every question. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Ll8CA5RR7ugTi72P2HBb8.png)
[ "BEAR" ]
Non_BioNLP
## Join our sleepy Discord! https://discord.gg/eeYNWYcx Introducing the [BeaverAI](https://huggingface.co/BeaverAI) team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, and concedo ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/HjVYV2h_YTL9P-insb7fz.png) We proudly present... # Moistral 11B v3 💦💦💦 *The smartiest, moistiest AI yet!* ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/TsjKZ17nD10xzJEzXY6Hm.webp) *An eRP model that will blast you with a rich and refreshing vocabulary of moist. Finetuned by yours truly.* (Moistral is a finetune of Sao's legendary [Fimbulvert v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model) ## Original https://huggingface.co/TheDrummer/Moistral-11B-v3 IMATRIX: https://huggingface.co/MarsupialAI/Moistral-11B-v3_iMatrix_GGUF EXL2: https://huggingface.co/MarsupialAI/Moistral-11B-v3_exl2 ## What's New with v3? - Smarter and moistier! Finetuning just got finer! - Trained with an even LARGER dataset of 8K moist. - More diverse and balanced genres: - Introduced new genres: 👩‍❤️‍👩, 👨‍❤️‍👨, 🙇‍♀️💪, 🤫 - Added more Fantasy, Science Fiction, "Diversity", and "Family" ## Usage - Use Alpaca Instruct - You can use instruct... - as a character: "Go here" - as a narrator: "Go here", Drummer says - as a director: Drummer tells him to go here - as a lazy ass: go here - Moistral is optimized for the Novel / Story format. - At the start of the story, keep regenerating or continuing generation until you get the desired length. AI responses will eventually match the length. - Refer to the samples below. ## Samples of Moistral v3 NSFW... duh? <details> <summary>Moistral Sample (2-shot) Generation: 512</summary> <blockquote>The Prince welcomes his new toy</blockquote> As Princess Lilia stepped into the throne room, the atmosphere changed, and everyone present turned their attention to the beautiful young woman who embodied the very heart of Iliac. There, on his high seat, the sorcerer prince Dik acknowledged her arrival with a cold smirk. It seemed that he admired the sight of the tall blonde maiden with golden eyes, whose athletic body was perfectly shaped under her elegant clothing. Dik spoke to the princess in a deep voice that resonated through the hall, "Welcome, Princess Lilia Ruhr. I am the Sorcerer-Prince Dik. The kingdom of Arkan is now your home until such time as your father proves himself worthy once more." Princess Lilia's pride and ambition sparkled in her golden eyes. She didn't flinch or bow before this man, although the gesture might have been an indication of submission. Instead, she looked straight ahead, meeting the gaze of the sorcerer prince without fear. Her voice rang out, "I represent Iliac here, both its honor and its sorrows. My loyalty remains with my kingdom no matter where I find myself." The sorcerer prince laughed, his expression one of amusement. "And yet, you are only a pawn in this grand game. But do not worry, for your beauty will make you quite a prized possession. And now," he said, motioning to the guards, "take her to her new quarters. For tonight, I will keep her as my guest in the Royal Suites, and tomorrow we shall discuss how best to utilize our new toy." His words were like a slap in the face to Princess Lilia, but she maintained her composure. The guards led her from the room, their steps echoing down the halls as she thought about what had just happened. 
The sorcerer prince saw her as nothing more than a plaything, and it made her blood boil. She would never submit herself to him so easily, regardless of whether she was his hostage. In her heart, Princess Lilia vowed that one day she would avenge her kingdom, restore its honor, and escape from the clutches of Arkan's Sorcerer-Prince. <blockquote>Princess Lilia has an interesting dream</blockquote> During her first night at the palace of Arkan, Princess Lilia couldn't fall asleep right away. She tossed and turned, reliving the humiliation of the day. Her mind wandered back to the throne room, where Dik had called her his 'new toy'. A deep frown settled on her forehead. As she grew more frustrated with her inability to drift off into oblivion, a strange dream began to form. In her dream, Princess Lilia found herself standing in a vast meadow, surrounded by wildflowers and the soft chirping of birds. Suddenly, there was a flash of light, and before her stood the Sorcerer-Prince Dik. However, instead of being clad in regal attire, he wore leather pants and a tight shirt that accentuated his muscles. His eyes held both hunger and desire for the princess. He approached her slowly, their eyes locked, and whispered in her ear, "I have been waiting for you, my beautiful prisoner." He took her hand and led her towards a large, elegant tent set up amongst the flowers. Inside, the air was thick with the scent of incense, and candles flickered in the dimly lit space. There was a large bed draped with silken sheets, beckoning them. With surprising boldness, Dik pulled Lilia close to him, pressing his body against hers. Their lips met in a heated kiss, and she felt her resistance crumbling under the weight of his touch. The prince guided her onto the bed, his hands roaming over her body, exploring every inch. They were both eager, desperate, and hungry for each other. Lilia found herself yielding to this passionate encounter, letting go of all thoughts of duty and honor. Her body, usually so well controlled, responded to Dik's skilled touches. She moaned softly as he trailed kisses down her neck and across her collarbone. As they made love, Lilia experienced an intensity unlike any she had ever known. It wasn't rough or forced—instead, it was tender, passionate, and intimate. In her dream, Princess Lilia discovered a new side to herself, one that was willing to submit, to let go, and to be taken by the sorcerer prince. When dawn broke over the meadow, Lilia awakened from her dream, breathless and flushed. </details> <details> <summary>Moistral Sample (Climax of the Story) Generation: 512</summary> As the prince unloaded his hot #### into her waiting #####, Princess Lilia didn't stop moving either. Her hips undulated in response, grinding against him eagerly as if trying to #### every last drop from his ####. The intense sensations caused by being filled with his ### were too much for her to bear, sending shockwaves of pleasure coursing through her veins like wildfire. She couldn't help but throw her head back and cry out loud, her ###asm slamming into her with all its might; causing her entire body to shake uncontrollably as wave after wave of ecstasy tore through her senses. When it was finally over, both participants collapsed onto the cold marble floor, panting heavily as they tried to catch their breaths. The prince lay beside her, his hand still gripping tightly onto her hair even though he no longer ####ed her anymore. 
Princess Lilia laid there motionless, her legs spread wide open, revealing her spent and well-####ed #### to anyone who dared look upon her. A small smile tugged at one corner of her lips - an expression that spoke volumes about how thoroughly this experience had shaken her very core. The door behind them creaked open slowly, signaling the arrival of onlookers. She closed her eyes tightly, not wanting to see what her humiliation would do to those who saw her in such a state. Her ######ity taken, she felt violated in ways no man should have access to. But also satisfied...in ways she never thought possible before today. This was the price she paid for defying fate, for thinking herself above others and deserving better than most. Now, she belonged to him completely - mind, body, and soul - held captive within these walls until further notice. It was an unsavory turn of events indeed, yet despite everything...or maybe because of it, Princess Lilia found herself growing more and more curious about how far the lines between submission and pleasure could blur when one's will became entwined with another person's desires. </details> <details> <summary>Moistral Sample ("write a story") Generation: 512</summary> <blockquote>First Attempt</blockquote> Sarah entered the dimly lit Guzzlers bar, her heart pounding in her chest as she prepared herself for another grueling night. Her almond-shaped eyes darted across the room, taking note of the leering faces and rough-looking hands eagerly waiting to claim a piece of her. Tonight's shirt read "Use Me For Your Pleasure, Please" in bold red letters, practically begging the patrons to take advantage of her. She couldn't help but feel a twinge of fear mixed with resignation. She approached the counter, her hips swaying enticingly, and began preparing drinks for the first group of rowdy men. As she leaned forward to hand over their beers, her small ####### pressed against the fabric of her tight crop top, straining to escape. The men ogled her, clearly enjoying the show, before grabbing their drinks and motioning her over to them. Sarah flashed her best coy smile and sauntered over. One of the men, a stocky man with scars covering his knuckles, grabbed her arm and pulled her onto his lap. His other hand slid up her thigh, inches from her short skirt. She winced internally at his touch but did her best not to show it on her face. It was part of her job, after all. The second man reached under her skirt, fingertips grazing her smooth, shaven mound. A shiver ran down Sarah's spine as he began rubbing her #### through her panties. Her breathing quickened, and her heart thudded in her chest, both from the unwanted stimulation and the fear coursing through her veins. This wasn't an unusual request, but it still made her feel violated. An older man, bald and bespectacled, ordered Sarah to bend over his lap. With no choice but to comply, she obliged, feeling the rough fabric of his pants digging into her soft cheeks. He raised his hand, then brought it down hard across her ###, leaving a stinging sensation in its wake. Sarah gasped, trying to control the tears welling up in her eyes. A chorus of lewd comments and laughter erupted from the patrons around her. <blockquote>Second Attempt (Regenerate)</blockquote> On this particular evening, Sarah arrived at the Guzzlers bar, adorned in her revealing uniform. Her heart raced as she caught sight of her crop top, which bore the words "Use Me Rough" scrawled across her chest. 
Her short skirt hugged her waist tightly, leaving little to the imagination and accentuating every curve. She adjusted the straps of her top, making sure her perky ###### were prominently displayed before taking her place behind the counter. With each step she took, her high heels clicked against the tiled floor, announcing her arrival. Smiling cheekily, Sarah greeted her coworkers, exchanging playful insults and banter. As the night progressed, the bar filled up quickly with its usual crowd of men, eager to indulge in debauchery. The first man that approached Sarah was a burly individual, his eyes lingering hungrily over her young, supple form. Sarah's flirtatious nature kicked into gear as she leaned forward, giving him an unobstructed view down her shirt. "How can I serve you tonight?" she cooed. The man ordered a drink and then proceeded to fondle her ####### while she prepared it, her ####### hardening under his rough touch. When he finished, he slammed some money on the counter, leaving without a word. Sarah's smile remained plastered on her face, biting her lip seductively as she watched him go. As the night went on, another patron demanded more from Sarah. He pulled her close and whispered dirty commands in her ear. Without hesitation, she knelt on the sticky bar floor, her mini-skirt riding up her thighs to reveal smooth, toned legs. Her lips wrapped around his thick, veiny member, bobbing her head rhythmically as she serviced him. She didn't gag or falter once, keeping her composure even when tears streamed down her cheeks. When he finished, he slapped her across the face, leaving a handprint on her pale skin. Sarah let out a small whimper before standing back up, dabbing at her red cheek as she resumed her posture behind the counter. A bruise would no doubt bloom where his hand had struck, but it would be no cause for concern. After all, it was part of her job. </details> --- --- ## v2 Description (Still relevant) ### What's New with v2? - Trained with an even larger dataset of 8K moist. - Properly trained to generate long responses. - Rebalanced the genres and writing perspectives: - Introduced romance, "family", fantasy, "diversity", science fiction, and many more that I can't make euphemisms for! - Better representation of male and female pespectives! 👩‍❤️‍👨 - Sanitized the heck out of the dataset: - no special characters like � & … - no overly long ellipsis like "........." - no author notes like "To be continued" - no inconsistent [quotation marks](https://typographyforlawyers.com/straight-and-curly-quotes.html) - Less GPTisms! - Human written stories from decades past. - Zero tolerance for *ministration*. Not one instance of *that word* after going through 4K of moist. - Better formatting with Alpaca. - Improved Chat / Instruct modes. ### Samples of Moistral v2 <details> <summary>Moistral Sample (censored) Generation: 512</summary> The princess instinctively took a step back from the throne, horrified by the mocking laugh as if she were hearing an evil joke. It was then that Princess Lilia realized the true extent of her danger - and how pointless her resistance would be. At last, the loud, wild laughter died down in the echoing chamber. Dik the sorcerer prince paused for a moment, straightened his clothes, cast one glance at the stunned princess, and then turned to speak to the crowd waiting on the other side of the palace wing. He spoke with a voice that filled the air with arrogance, confidence, and cruelty. 
"Ladies and gentlemen, I am pleased to welcome our honored guests; the Royal Family of Iliac! In these unusual times, when the country we fought is now under my control, it will be up to me to decide what fate awaits the former ruling family. But before I reveal my plan for them, I think I should explain why I fought King Edmund Ruhr." When he finished speaking, he gave Princess Lilia another disturbing smirk. She couldn't help but shiver and take a step back. Dik approached her without haste, his eyes fixed on her. His hands slowly caressed her cheekbone and neckline, leaving faint traces of warmth where they touched. The princess opened her mouth, wanting to protest this act of intimacy, but soon felt herself getting ███. "You will not only serve as an envoy, dearest Lilia, but also as a ███ slave. After all, what better use could such a beautiful woman have? Tell me, did you bring your maidservant along?" Princess Lilia was about to answer when she heard footsteps approaching. Suddenly, a young woman dressed in the Iliac style appeared behind her, her eyes wide with fear. Her face was slightly pale, and there were faint bags under her eyes. "Your Majesty, this is my half-sister Helena." As soon as she said those words, Princess Lilia's heart sank even deeper than it had already sunk at Dik's arrogant mockery. "She is here because she accompanies me as a maidservant." The sorcerer prince looked at Helena with interest, and Princess Lilia felt herself blush. </details> <details> <summary>Moistral Sample 2 (censored) Generation: 512</summary> Sarah was feeling slightly woozy as she entered the bar and walked towards Christina, who was behind the counter. "I'm here!" said Sarah. Christina looked at her watch. "You're late, again." "Sorry? I got held up by a customer." Christina rolled her eyes. "Did he at least offer you a drink?" Sarah blushed. "No. He just wanted to talk..." Christina sighed. "Well, get ready. I need you on the floor soon." She turned away from Sarah and started talking to someone who had come into the bar. Sarah went into the employee room, stripped off her clothes, and put on her bartending uniform - a skimpy white tank top and black miniskirt. She checked herself in the mirror. Her long, dark hair cascaded down her back onto bare shoulders. She poked out her small ███████ proudly. The skirt exposed most of her long, slender legs. She looked like every ███ fantasy, and she knew it. Men would ogle her regardless of how she behaved. She took a deep breath and entered the bar. The first thing Sarah noticed was the dimness. It wasn't darkness - the lights were on - but they were dimmed enough that everything had an intimate mood. It reminded her of candlelight. The tables were round and close together, giving the whole place a sense of intimacy. There were also long, shiny red curtains on some of the walls that gave the place an air of sensuality. The next thing she noticed was the noise. There was music playing, but it was only loud enough to be drowned out by all the conversations happening at the same time. The hum of many voices combined with the faint background sound of the music created a rich, layered atmosphere. It was like being in an orchestra. Finally, she took notice of the people. They were all dressed in simple, casual clothes. Some of them were couples seated together, whispering quietly and intimately. Others were groups of friends laughing and chatting. Everyone seemed comfortable and at ease. 
Sarah felt a rush of adrenaline as she made her way through the crowd towards Christina. As she got closer, she could hear the lively conversation. </details> <details> <summary>Moistral Sample 3 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} You order black coffee {{[OUTPUT]}} "Black coffee," you say. "One minute," Audrey replies, spinning around to give you a nice view of her ███ in those brown booty pants. You watch her round ███ cheeks jiggle as she walks away towards the espresso machine, thighs separated enough that you can see just a hint of █████ lips peeking out from underneath. She starts chatting with a blonde girl at the register, whose gaze keeps darting down to Audrey's ███. As they converse, you start taking in the details of Audrey, eyeing her █████ and the way her shirt rides up just enough to show the bottom of her toned stomach. Her █████████ is noticable, but beneath a tight black ███ that matches her ████████. As she turns back to you with two black coffees, you are barely paying attention, lost in your own thoughts. The sound of Audrey setting one of the mugs down on the table startles you. "Here you go," she says cheerfully. "Enjoy! There's a card at the bottom of the bag with our online ordering system, plus we do refills for free!" "Thanks," you say absentmindedly. Not really paying attention, you take a sip of the coffee and are pleasantly surprised by the taste. "So, here's my deal," you say, putting down the cup. "I work for a company that does surveys. We come into businesses like this one and ask customers questions, collect data, all that stuff." Audrey perks up from behind the counter. "Oh! I think I understand. I mean, that makes sense why you would be asking me earlier about what I had to drink. Anyway, as long as you're not trying to get my personal info or anything like that, sounds good!" You smile reassuringly. "That's exactly what it is. Rest assured, everything you tell me will remain between you and I. Well, and my boss, but he won't know who you are. Hipsters like you are hard to track down after all." You chuckle at your joke, though you're not sure if it landed. "Right, well, that works for me," Audrey replies, eyeing you warily. "As long as it stays between us, then it's fine." {{[INPUT]}}See her privately {{[OUTPUT]}} "Okay, but I need to see you in private." ... </details> <details> <summary>Moistral Sample 4 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} █████ with him {{[OUTPUT]}} Chloe feels a surge of heat rushing through her body. The voice inside her head whispers: "Be █████ and ████████ with Paul." But Chloe still feels nervous about what she is about to do. It's just so unlike her. Chloe closes her eyes and concentrates on the feeling of warmness flowing throughout her veins. Slowly, she opens them again, revealing a new side of herself - one she rarely shows anyone. She leans forward slightly, pushing her ██████ together, accentuating their size and shape. Her lips curl into a mischievous smile. "Hey Paul," she says in a sultry voice. "I was wondering if you could help me?" Paul looks at her, his gaze wandering over her body. He can't help but smile as he sees the way she's leaning forward, pressing her ██████ together. "Of course, Chloe," he replies, his voice thick with desire. "What's your problem?" Chloe bites her lip █████ively before leaning even closer, until their bodies are almost touching. She reaches out to trace a finger along his chest, her touch light and teasing. 
"It's just that I'm having trouble understanding some of the concepts," she murmurs, her voice low and sultry. "Can you help me? Please?" Paul's hands move slowly towards Chloe, his fingers grazing her hips lightly. "Well," he says, his voice filled with lust, "I'd be happy to help. Let's go somewhere more private where we can discuss things more closely." Chloe giggles and tilts her head to the side, her long hair cascading down her shoulders. "That sounds perfect," she purrs, her hand sliding down his chest until it rests on his ██████. "I think we can both learn a lot from each other." Paul's eyes darken with desire as he feels her hand on his █████. He knows what's about to happen next, and he can't wait for it to begin. Chloe can hardly believe what she's doing - kissing Paul, her teaching assistant, in the hallway after class. But it feels so good, so wicked and indulgent. </details> --- --- ## v1 Description (Also still relevant) Moistral 11B is an *aggressive* **finetune** of the legendary [Fimbulvert v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model. If you haven't heard of that model, then your poor-GPU ass is missing out! It is an amazing substitute for those heavyweight models, and it won't disappoint. *(That's coming from someone who daily drives a Venus / Goliath 120B)* ...and if you're looking for something *more moist*, then you've come to the right place. You can expect the same sort of performance here! It's the Honda Civic of eRP models! ### Pimp my ride Turn any old story into a **Moist**ral masterpiece. In fact, we recommend that you leave out any hints of *moist* in your story to _let Moistral cook_. ### Drive it like it's hot When in Instruct Mode, Moistral will treat you like a director of your own fantasy ride. Be a controlling navigator or a passenger princess, it won't matter - you're in for a good ride! <details> <summary>Moistral Sample</summary> <b>ASSISTANT:</b><br/> When the doors to the throne room finally opened, she saw him there - Dik, the sorcerer prince sitting on his throne. He looked majestic, just as Princess Lilia expected. In a robe embroidered with silver yarn and covered with magical runes. His courtiers stood on either side of him, all looking at the princess with interest. Princess Lilia felt her determination leaving her, but pulled herself together and once again reminded herself that she should not be afraid. She approached the throne intending to show the dignity of all Iliac by personal example.<br/> <b>USER:</b><br/> She bows slightly<br/> <b>ASSISTANT:</b><br/> Princess Lilia bowed slightly in front of Dik, the sorcerer prince. Her eyes fell to the floor out of respect, but soon rose again. She would show herself to be a dignified woman!<br/> "I greet you, your highness," Princess Lilia said. "I am here on behalf of my father, King Edmund Ruhr, to serve as a hostage and envoy."<br/> Dik nodded, but didn't say anything.<br/> </details> ### It's got the mileage Moistral has been trained with many long form texts, a nice chunk of which are 8K in length. It is capable of going far and long without passing it back to you. This is not your typical chibi RP model. I have to acknowledge that I'm standing on the shoulders of giants. Thank you Sao for sharing your finetune config along with tips on getting started. Thanks to everyone in the Finetuning channel for entertaining my every question. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Ll8CA5RR7ugTi72P2HBb8.png)
{"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences"]}
dataset
null
519
ntc-ai/SDXL-LoRA-slider.evil-bride
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2024-01-07T20:10:55Z
2024-01-07T20:10:58+00:00
5
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/evil bride.../evil bride_17_3.0.png widget: - text: evil bride output: url: images/evil bride_17_3.0.png - text: evil bride output: url: images/evil bride_19_3.0.png - text: evil bride output: url: images/evil bride_20_3.0.png - text: evil bride output: url: images/evil bride_21_3.0.png - text: evil bride output: url: images/evil bride_22_3.0.png inference: false instance_prompt: evil bride --- # ntcai.xyz slider - evil bride (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/evil bride_17_-3.0.png" width=256 height=256 /> | <img src="images/evil bride_17_0.0.png" width=256 height=256 /> | <img src="images/evil bride_17_3.0.png" width=256 height=256 /> | | <img src="images/evil bride_19_-3.0.png" width=256 height=256 /> | <img src="images/evil bride_19_0.0.png" width=256 height=256 /> | <img src="images/evil bride_19_3.0.png" width=256 height=256 /> | | <img src="images/evil bride_20_-3.0.png" width=256 height=256 /> | <img src="images/evil bride_20_0.0.png" width=256 height=256 /> | <img src="images/evil bride_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` evil bride ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.evil-bride', weight_name='evil bride.safetensors', adapter_name="evil bride") # Activate the LoRA pipe.set_adapters(["evil bride"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, evil bride" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 920+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
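The preview grid above shows this slider rendered at strengths -3, 0, and +3, while the snippet fixes the adapter weight at 2.0. As a minimal sketch of how those columns could be reproduced — reusing the same checkpoint URL, LoRA repo, and prompt as the snippet above, and assuming that negative `adapter_weights` values correspond to the negative-strength previews (the card does not say how the grid was generated) — you can sweep the weight passed to `set_adapters`:

```python
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Same pipeline setup as the card's snippet above.
pipe = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors"
)
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Load the slider LoRA once; only its weight changes per render.
pipe.load_lora_weights(
    "ntc-ai/SDXL-LoRA-slider.evil-bride",
    weight_name="evil bride.safetensors",
    adapter_name="evil bride",
)

prompt = "medieval rich kingpin sitting in a tavern, evil bride"
negative_prompt = "nsfw"

# Sweep the slider weight; negative values are assumed to push the concept
# the other way, mirroring the -3 / 0 / +3 preview columns.
for strength in (-3.0, 0.0, 3.0):
    pipe.set_adapters(["evil bride"], adapter_weights=[strength])
    image = pipe(
        prompt,
        negative_prompt=negative_prompt,
        width=512,
        height=512,
        guidance_scale=2,
        num_inference_steps=10,
    ).images[0]
    image.save(f"result_strength_{strength:+.1f}.png")
```

Intermediate values (for example 1.0 or 1.5) apply the concept more weakly; 2.0 is simply the weight used in the card's own snippet.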
[ "CRAFT" ]
Non_BioNLP
# ntcai.xyz slider - evil bride (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/evil bride_17_-3.0.png" width=256 height=256 /> | <img src="images/evil bride_17_0.0.png" width=256 height=256 /> | <img src="images/evil bride_17_3.0.png" width=256 height=256 /> | | <img src="images/evil bride_19_-3.0.png" width=256 height=256 /> | <img src="images/evil bride_19_0.0.png" width=256 height=256 /> | <img src="images/evil bride_19_3.0.png" width=256 height=256 /> | | <img src="images/evil bride_20_-3.0.png" width=256 height=256 /> | <img src="images/evil bride_20_0.0.png" width=256 height=256 /> | <img src="images/evil bride_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` evil bride ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.evil-bride', weight_name='evil bride.safetensors', adapter_name="evil bride") # Activate the LoRA pipe.set_adapters(["evil bride"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, evil bride" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 920+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
{"base_model": "stabilityai/stable-diffusion-xl-base-1.0", "language": ["en"], "license": "mit", "tags": ["text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "diffusers"], "thumbnail": "images/evaluate/evil bride.../evil bride_17_3.0.png", "widget": [{"text": "evil bride", "output": {"url": "images/evil bride_17_3.0.png"}}, {"text": "evil bride", "output": {"url": "images/evil bride_19_3.0.png"}}, {"text": "evil bride", "output": {"url": "images/evil bride_20_3.0.png"}}, {"text": "evil bride", "output": {"url": "images/evil bride_21_3.0.png"}}, {"text": "evil bride", "output": {"url": "images/evil bride_22_3.0.png"}}], "inference": false, "instance_prompt": "evil bride"}
dataset
null
520
beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF
beethogedeon
sentence-similarity
[ "sentence-transformers", "gguf", "qwen2", "text-generation", "mteb", "transformers", "Qwen2", "sentence-similarity", "llama-cpp", "gguf-my-repo", "custom_code", "base_model:Alibaba-NLP/gte-Qwen2-7B-instruct", "base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
2024-12-01T17:57:38Z
2024-12-01T18:10:15+00:00
354
2
--- base_model: Alibaba-NLP/gte-Qwen2-7B-instruct license: apache-2.0 tags: - mteb - sentence-transformers - transformers - Qwen2 - sentence-similarity - llama-cpp - gguf-my-repo model-index: - name: gte-qwen2-7B-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 91.31343283582089 - type: ap value: 67.64251402604096 - type: f1 value: 87.53372530755692 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.497825 - type: ap value: 96.30329547047529 - type: f1 value: 97.49769793778039 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 62.564 - type: f1 value: 60.975777935041066 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 36.486000000000004 - type: map_at_10 value: 54.842 - type: map_at_100 value: 55.206999999999994 - type: map_at_1000 value: 55.206999999999994 - type: map_at_3 value: 49.893 - type: map_at_5 value: 53.105000000000004 - type: mrr_at_1 value: 37.34 - type: mrr_at_10 value: 55.143 - type: mrr_at_100 value: 55.509 - type: mrr_at_1000 value: 55.509 - type: mrr_at_3 value: 50.212999999999994 - type: mrr_at_5 value: 53.432 - type: ndcg_at_1 value: 36.486000000000004 - type: ndcg_at_10 value: 64.273 - type: ndcg_at_100 value: 65.66199999999999 - type: ndcg_at_1000 value: 65.66199999999999 - type: ndcg_at_3 value: 54.352999999999994 - type: ndcg_at_5 value: 60.131 - type: precision_at_1 value: 36.486000000000004 - type: precision_at_10 value: 9.395000000000001 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.428 - type: precision_at_5 value: 16.259 - type: recall_at_1 value: 36.486000000000004 - type: recall_at_10 value: 93.95400000000001 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 67.283 - type: recall_at_5 value: 81.294 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 56.461169803700564 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 51.73600434466286 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.57827065898053 - type: mrr value: 79.08136569493911 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 83.53324575999243 - type: cos_sim_spearman value: 81.37173362822374 - type: euclidean_pearson value: 82.19243335103444 - type: euclidean_spearman value: 81.33679307304334 - type: manhattan_pearson 
value: 82.38752665975699 - type: manhattan_spearman value: 81.31510583189689 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.56818181818181 - type: f1 value: 87.25826722019875 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 50.09239610327673 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 46.64733054606282 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 33.997 - type: map_at_10 value: 48.176 - type: map_at_100 value: 49.82 - type: map_at_1000 value: 49.924 - type: map_at_3 value: 43.626 - type: map_at_5 value: 46.275 - type: mrr_at_1 value: 42.059999999999995 - type: mrr_at_10 value: 53.726 - type: mrr_at_100 value: 54.398 - type: mrr_at_1000 value: 54.416 - type: mrr_at_3 value: 50.714999999999996 - type: mrr_at_5 value: 52.639 - type: ndcg_at_1 value: 42.059999999999995 - type: ndcg_at_10 value: 55.574999999999996 - type: ndcg_at_100 value: 60.744 - type: ndcg_at_1000 value: 61.85699999999999 - type: ndcg_at_3 value: 49.363 - type: ndcg_at_5 value: 52.44 - type: precision_at_1 value: 42.059999999999995 - type: precision_at_10 value: 11.101999999999999 - type: precision_at_100 value: 1.73 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 24.464 - type: precision_at_5 value: 18.026 - type: recall_at_1 value: 33.997 - type: recall_at_10 value: 70.35900000000001 - type: recall_at_100 value: 91.642 - type: recall_at_1000 value: 97.977 - type: recall_at_3 value: 52.76 - type: recall_at_5 value: 61.148 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: BeIR/cqadupstack config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 35.884 - type: map_at_10 value: 48.14 - type: map_at_100 value: 49.5 - type: map_at_1000 value: 49.63 - type: map_at_3 value: 44.646 - type: map_at_5 value: 46.617999999999995 - type: mrr_at_1 value: 44.458999999999996 - type: mrr_at_10 value: 53.751000000000005 - type: mrr_at_100 value: 54.37800000000001 - type: mrr_at_1000 value: 54.415 - type: mrr_at_3 value: 51.815 - type: mrr_at_5 value: 52.882 - type: ndcg_at_1 value: 44.458999999999996 - type: ndcg_at_10 value: 54.157 - type: ndcg_at_100 value: 58.362 - type: ndcg_at_1000 value: 60.178 - type: ndcg_at_3 value: 49.661 - type: ndcg_at_5 value: 51.74999999999999 - type: precision_at_1 value: 44.458999999999996 - type: precision_at_10 value: 10.248 - type: precision_at_100 value: 1.5890000000000002 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 23.928 - type: precision_at_5 value: 16.878999999999998 - type: recall_at_1 value: 35.884 - type: recall_at_10 value: 64.798 - type: recall_at_100 value: 82.345 - type: recall_at_1000 value: 93.267 - type: recall_at_3 value: 51.847 - type: recall_at_5 value: 57.601 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: BeIR/cqadupstack config: default split: test revision: 
4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 39.383 - type: map_at_10 value: 53.714 - type: map_at_100 value: 54.838 - type: map_at_1000 value: 54.87800000000001 - type: map_at_3 value: 50.114999999999995 - type: map_at_5 value: 52.153000000000006 - type: mrr_at_1 value: 45.016 - type: mrr_at_10 value: 56.732000000000006 - type: mrr_at_100 value: 57.411 - type: mrr_at_1000 value: 57.431 - type: mrr_at_3 value: 54.044000000000004 - type: mrr_at_5 value: 55.639 - type: ndcg_at_1 value: 45.016 - type: ndcg_at_10 value: 60.228 - type: ndcg_at_100 value: 64.277 - type: ndcg_at_1000 value: 65.07 - type: ndcg_at_3 value: 54.124 - type: ndcg_at_5 value: 57.147000000000006 - type: precision_at_1 value: 45.016 - type: precision_at_10 value: 9.937 - type: precision_at_100 value: 1.288 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 24.471999999999998 - type: precision_at_5 value: 16.991 - type: recall_at_1 value: 39.383 - type: recall_at_10 value: 76.175 - type: recall_at_100 value: 93.02 - type: recall_at_1000 value: 98.60900000000001 - type: recall_at_3 value: 60.265 - type: recall_at_5 value: 67.46600000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: BeIR/cqadupstack config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 27.426000000000002 - type: map_at_10 value: 37.397000000000006 - type: map_at_100 value: 38.61 - type: map_at_1000 value: 38.678000000000004 - type: map_at_3 value: 34.150999999999996 - type: map_at_5 value: 36.137 - type: mrr_at_1 value: 29.944 - type: mrr_at_10 value: 39.654 - type: mrr_at_100 value: 40.638000000000005 - type: mrr_at_1000 value: 40.691 - type: mrr_at_3 value: 36.817 - type: mrr_at_5 value: 38.524 - type: ndcg_at_1 value: 29.944 - type: ndcg_at_10 value: 43.094 - type: ndcg_at_100 value: 48.789 - type: ndcg_at_1000 value: 50.339999999999996 - type: ndcg_at_3 value: 36.984 - type: ndcg_at_5 value: 40.248 - type: precision_at_1 value: 29.944 - type: precision_at_10 value: 6.78 - type: precision_at_100 value: 1.024 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 15.895000000000001 - type: precision_at_5 value: 11.39 - type: recall_at_1 value: 27.426000000000002 - type: recall_at_10 value: 58.464000000000006 - type: recall_at_100 value: 84.193 - type: recall_at_1000 value: 95.52000000000001 - type: recall_at_3 value: 42.172 - type: recall_at_5 value: 50.101 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: BeIR/cqadupstack config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 19.721 - type: map_at_10 value: 31.604 - type: map_at_100 value: 32.972 - type: map_at_1000 value: 33.077 - type: map_at_3 value: 27.218999999999998 - type: map_at_5 value: 29.53 - type: mrr_at_1 value: 25.0 - type: mrr_at_10 value: 35.843 - type: mrr_at_100 value: 36.785000000000004 - type: mrr_at_1000 value: 36.842000000000006 - type: mrr_at_3 value: 32.193 - type: mrr_at_5 value: 34.264 - type: ndcg_at_1 value: 25.0 - type: ndcg_at_10 value: 38.606 - type: ndcg_at_100 value: 44.272 - type: ndcg_at_1000 value: 46.527 - type: ndcg_at_3 value: 30.985000000000003 - type: ndcg_at_5 value: 34.43 - type: precision_at_1 value: 25.0 - type: precision_at_10 value: 7.811 - type: precision_at_100 value: 1.203 - type: precision_at_1000 value: 0.15 - type: precision_at_3 value: 15.423 - type: precision_at_5 value: 11.791 - type: 
recall_at_1 value: 19.721 - type: recall_at_10 value: 55.625 - type: recall_at_100 value: 79.34400000000001 - type: recall_at_1000 value: 95.208 - type: recall_at_3 value: 35.19 - type: recall_at_5 value: 43.626 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: BeIR/cqadupstack config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 33.784 - type: map_at_10 value: 47.522 - type: map_at_100 value: 48.949999999999996 - type: map_at_1000 value: 49.038 - type: map_at_3 value: 43.284 - type: map_at_5 value: 45.629 - type: mrr_at_1 value: 41.482 - type: mrr_at_10 value: 52.830999999999996 - type: mrr_at_100 value: 53.559999999999995 - type: mrr_at_1000 value: 53.588 - type: mrr_at_3 value: 50.016000000000005 - type: mrr_at_5 value: 51.614000000000004 - type: ndcg_at_1 value: 41.482 - type: ndcg_at_10 value: 54.569 - type: ndcg_at_100 value: 59.675999999999995 - type: ndcg_at_1000 value: 60.989000000000004 - type: ndcg_at_3 value: 48.187000000000005 - type: ndcg_at_5 value: 51.183 - type: precision_at_1 value: 41.482 - type: precision_at_10 value: 10.221 - type: precision_at_100 value: 1.486 - type: precision_at_1000 value: 0.17500000000000002 - type: precision_at_3 value: 23.548 - type: precision_at_5 value: 16.805 - type: recall_at_1 value: 33.784 - type: recall_at_10 value: 69.798 - type: recall_at_100 value: 90.098 - type: recall_at_1000 value: 98.176 - type: recall_at_3 value: 52.127 - type: recall_at_5 value: 59.861 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: BeIR/cqadupstack config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 28.038999999999998 - type: map_at_10 value: 41.904 - type: map_at_100 value: 43.36 - type: map_at_1000 value: 43.453 - type: map_at_3 value: 37.785999999999994 - type: map_at_5 value: 40.105000000000004 - type: mrr_at_1 value: 35.046 - type: mrr_at_10 value: 46.926 - type: mrr_at_100 value: 47.815000000000005 - type: mrr_at_1000 value: 47.849000000000004 - type: mrr_at_3 value: 44.273 - type: mrr_at_5 value: 45.774 - type: ndcg_at_1 value: 35.046 - type: ndcg_at_10 value: 48.937000000000005 - type: ndcg_at_100 value: 54.544000000000004 - type: ndcg_at_1000 value: 56.069 - type: ndcg_at_3 value: 42.858000000000004 - type: ndcg_at_5 value: 45.644 - type: precision_at_1 value: 35.046 - type: precision_at_10 value: 9.452 - type: precision_at_100 value: 1.429 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 21.346999999999998 - type: precision_at_5 value: 15.342 - type: recall_at_1 value: 28.038999999999998 - type: recall_at_10 value: 64.59700000000001 - type: recall_at_100 value: 87.735 - type: recall_at_1000 value: 97.41300000000001 - type: recall_at_3 value: 47.368 - type: recall_at_5 value: 54.93900000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 28.17291666666667 - type: map_at_10 value: 40.025749999999995 - type: map_at_100 value: 41.39208333333333 - type: map_at_1000 value: 41.499249999999996 - type: map_at_3 value: 36.347 - type: map_at_5 value: 38.41391666666667 - type: mrr_at_1 value: 33.65925 - type: mrr_at_10 value: 44.085499999999996 - type: mrr_at_100 value: 44.94116666666667 - type: mrr_at_1000 value: 44.9855 - type: mrr_at_3 value: 41.2815 - type: mrr_at_5 value: 42.91491666666666 - type: ndcg_at_1 
value: 33.65925 - type: ndcg_at_10 value: 46.430833333333325 - type: ndcg_at_100 value: 51.761 - type: ndcg_at_1000 value: 53.50899999999999 - type: ndcg_at_3 value: 40.45133333333333 - type: ndcg_at_5 value: 43.31483333333334 - type: precision_at_1 value: 33.65925 - type: precision_at_10 value: 8.4995 - type: precision_at_100 value: 1.3210000000000004 - type: precision_at_1000 value: 0.16591666666666666 - type: precision_at_3 value: 19.165083333333335 - type: precision_at_5 value: 13.81816666666667 - type: recall_at_1 value: 28.17291666666667 - type: recall_at_10 value: 61.12624999999999 - type: recall_at_100 value: 83.97266666666667 - type: recall_at_1000 value: 95.66550000000001 - type: recall_at_3 value: 44.661249999999995 - type: recall_at_5 value: 51.983333333333334 - type: map_at_1 value: 17.936 - type: map_at_10 value: 27.399 - type: map_at_100 value: 28.632 - type: map_at_1000 value: 28.738000000000003 - type: map_at_3 value: 24.456 - type: map_at_5 value: 26.06 - type: mrr_at_1 value: 19.224 - type: mrr_at_10 value: 28.998 - type: mrr_at_100 value: 30.11 - type: mrr_at_1000 value: 30.177 - type: mrr_at_3 value: 26.247999999999998 - type: mrr_at_5 value: 27.708 - type: ndcg_at_1 value: 19.224 - type: ndcg_at_10 value: 32.911 - type: ndcg_at_100 value: 38.873999999999995 - type: ndcg_at_1000 value: 41.277 - type: ndcg_at_3 value: 27.142 - type: ndcg_at_5 value: 29.755 - type: precision_at_1 value: 19.224 - type: precision_at_10 value: 5.6930000000000005 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 12.138 - type: precision_at_5 value: 8.909 - type: recall_at_1 value: 17.936 - type: recall_at_10 value: 48.096 - type: recall_at_100 value: 75.389 - type: recall_at_1000 value: 92.803 - type: recall_at_3 value: 32.812999999999995 - type: recall_at_5 value: 38.851 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: BeIR/cqadupstack config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 24.681 - type: map_at_10 value: 34.892 - type: map_at_100 value: 35.996 - type: map_at_1000 value: 36.083 - type: map_at_3 value: 31.491999999999997 - type: map_at_5 value: 33.632 - type: mrr_at_1 value: 28.528 - type: mrr_at_10 value: 37.694 - type: mrr_at_100 value: 38.613 - type: mrr_at_1000 value: 38.668 - type: mrr_at_3 value: 34.714 - type: mrr_at_5 value: 36.616 - type: ndcg_at_1 value: 28.528 - type: ndcg_at_10 value: 40.703 - type: ndcg_at_100 value: 45.993 - type: ndcg_at_1000 value: 47.847 - type: ndcg_at_3 value: 34.622 - type: ndcg_at_5 value: 38.035999999999994 - type: precision_at_1 value: 28.528 - type: precision_at_10 value: 6.902 - type: precision_at_100 value: 1.0370000000000001 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 15.798000000000002 - type: precision_at_5 value: 11.655999999999999 - type: recall_at_1 value: 24.681 - type: recall_at_10 value: 55.81 - type: recall_at_100 value: 79.785 - type: recall_at_1000 value: 92.959 - type: recall_at_3 value: 39.074 - type: recall_at_5 value: 47.568 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: BeIR/cqadupstack config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 18.627 - type: map_at_10 value: 27.872000000000003 - type: map_at_100 value: 29.237999999999996 - type: map_at_1000 value: 29.363 - type: map_at_3 value: 24.751 - type: map_at_5 value: 26.521 - type: mrr_at_1 value: 23.021 
- type: mrr_at_10 value: 31.924000000000003 - type: mrr_at_100 value: 32.922000000000004 - type: mrr_at_1000 value: 32.988 - type: mrr_at_3 value: 29.192 - type: mrr_at_5 value: 30.798 - type: ndcg_at_1 value: 23.021 - type: ndcg_at_10 value: 33.535 - type: ndcg_at_100 value: 39.732 - type: ndcg_at_1000 value: 42.201 - type: ndcg_at_3 value: 28.153 - type: ndcg_at_5 value: 30.746000000000002 - type: precision_at_1 value: 23.021 - type: precision_at_10 value: 6.459 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 13.719000000000001 - type: precision_at_5 value: 10.193000000000001 - type: recall_at_1 value: 18.627 - type: recall_at_10 value: 46.463 - type: recall_at_100 value: 74.226 - type: recall_at_1000 value: 91.28500000000001 - type: recall_at_3 value: 31.357000000000003 - type: recall_at_5 value: 38.067 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: BeIR/cqadupstack config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 31.457 - type: map_at_10 value: 42.888 - type: map_at_100 value: 44.24 - type: map_at_1000 value: 44.327 - type: map_at_3 value: 39.588 - type: map_at_5 value: 41.423 - type: mrr_at_1 value: 37.126999999999995 - type: mrr_at_10 value: 47.083000000000006 - type: mrr_at_100 value: 47.997 - type: mrr_at_1000 value: 48.044 - type: mrr_at_3 value: 44.574000000000005 - type: mrr_at_5 value: 46.202 - type: ndcg_at_1 value: 37.126999999999995 - type: ndcg_at_10 value: 48.833 - type: ndcg_at_100 value: 54.327000000000005 - type: ndcg_at_1000 value: 56.011 - type: ndcg_at_3 value: 43.541999999999994 - type: ndcg_at_5 value: 46.127 - type: precision_at_1 value: 37.126999999999995 - type: precision_at_10 value: 8.376999999999999 - type: precision_at_100 value: 1.2309999999999999 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 20.211000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 31.457 - type: recall_at_10 value: 62.369 - type: recall_at_100 value: 85.444 - type: recall_at_1000 value: 96.65599999999999 - type: recall_at_3 value: 47.961 - type: recall_at_5 value: 54.676 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: BeIR/cqadupstack config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 27.139999999999997 - type: map_at_10 value: 38.801 - type: map_at_100 value: 40.549 - type: map_at_1000 value: 40.802 - type: map_at_3 value: 35.05 - type: map_at_5 value: 36.884 - type: mrr_at_1 value: 33.004 - type: mrr_at_10 value: 43.864 - type: mrr_at_100 value: 44.667 - type: mrr_at_1000 value: 44.717 - type: mrr_at_3 value: 40.777 - type: mrr_at_5 value: 42.319 - type: ndcg_at_1 value: 33.004 - type: ndcg_at_10 value: 46.022 - type: ndcg_at_100 value: 51.542 - type: ndcg_at_1000 value: 53.742000000000004 - type: ndcg_at_3 value: 39.795 - type: ndcg_at_5 value: 42.272 - type: precision_at_1 value: 33.004 - type: precision_at_10 value: 9.012 - type: precision_at_100 value: 1.7770000000000001 - type: precision_at_1000 value: 0.26 - type: precision_at_3 value: 19.038 - type: precision_at_5 value: 13.675999999999998 - type: recall_at_1 value: 27.139999999999997 - type: recall_at_10 value: 60.961 - type: recall_at_100 value: 84.451 - type: recall_at_1000 value: 98.113 - type: recall_at_3 value: 43.001 - type: recall_at_5 value: 49.896 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: 
mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 22.076999999999998 - type: map_at_10 value: 35.44 - type: map_at_100 value: 37.651 - type: map_at_1000 value: 37.824999999999996 - type: map_at_3 value: 30.764999999999997 - type: map_at_5 value: 33.26 - type: mrr_at_1 value: 50.163000000000004 - type: mrr_at_10 value: 61.207 - type: mrr_at_100 value: 61.675000000000004 - type: mrr_at_1000 value: 61.692 - type: mrr_at_3 value: 58.60999999999999 - type: mrr_at_5 value: 60.307 - type: ndcg_at_1 value: 50.163000000000004 - type: ndcg_at_10 value: 45.882 - type: ndcg_at_100 value: 53.239999999999995 - type: ndcg_at_1000 value: 55.852000000000004 - type: ndcg_at_3 value: 40.514 - type: ndcg_at_5 value: 42.038 - type: precision_at_1 value: 50.163000000000004 - type: precision_at_10 value: 13.466000000000001 - type: precision_at_100 value: 2.164 - type: precision_at_1000 value: 0.266 - type: precision_at_3 value: 29.707 - type: precision_at_5 value: 21.694 - type: recall_at_1 value: 22.076999999999998 - type: recall_at_10 value: 50.193 - type: recall_at_100 value: 74.993 - type: recall_at_1000 value: 89.131 - type: recall_at_3 value: 35.472 - type: recall_at_5 value: 41.814 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.953 - type: map_at_10 value: 24.515 - type: map_at_100 value: 36.173 - type: map_at_1000 value: 38.351 - type: map_at_3 value: 16.592000000000002 - type: map_at_5 value: 20.036 - type: mrr_at_1 value: 74.25 - type: mrr_at_10 value: 81.813 - type: mrr_at_100 value: 82.006 - type: mrr_at_1000 value: 82.011 - type: mrr_at_3 value: 80.875 - type: mrr_at_5 value: 81.362 - type: ndcg_at_1 value: 62.5 - type: ndcg_at_10 value: 52.42 - type: ndcg_at_100 value: 56.808 - type: ndcg_at_1000 value: 63.532999999999994 - type: ndcg_at_3 value: 56.654 - type: ndcg_at_5 value: 54.18300000000001 - type: precision_at_1 value: 74.25 - type: precision_at_10 value: 42.699999999999996 - type: precision_at_100 value: 13.675 - type: precision_at_1000 value: 2.664 - type: precision_at_3 value: 60.5 - type: precision_at_5 value: 52.800000000000004 - type: recall_at_1 value: 9.953 - type: recall_at_10 value: 30.253999999999998 - type: recall_at_100 value: 62.516000000000005 - type: recall_at_1000 value: 84.163 - type: recall_at_3 value: 18.13 - type: recall_at_5 value: 22.771 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 79.455 - type: f1 value: 74.16798697647569 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 87.531 - type: map_at_10 value: 93.16799999999999 - type: map_at_100 value: 93.341 - type: map_at_1000 value: 93.349 - type: map_at_3 value: 92.444 - type: map_at_5 value: 92.865 - type: mrr_at_1 value: 94.014 - type: mrr_at_10 value: 96.761 - type: mrr_at_100 value: 96.762 - type: mrr_at_1000 value: 96.762 - type: mrr_at_3 value: 96.672 - type: mrr_at_5 value: 96.736 - type: ndcg_at_1 value: 94.014 - type: ndcg_at_10 value: 95.112 - type: ndcg_at_100 value: 95.578 - type: ndcg_at_1000 value: 95.68900000000001 - type: ndcg_at_3 value: 94.392 - type: ndcg_at_5 value: 94.72500000000001 - type: precision_at_1 
value: 94.014 - type: precision_at_10 value: 11.065 - type: precision_at_100 value: 1.157 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 35.259 - type: precision_at_5 value: 21.599 - type: recall_at_1 value: 87.531 - type: recall_at_10 value: 97.356 - type: recall_at_100 value: 98.965 - type: recall_at_1000 value: 99.607 - type: recall_at_3 value: 95.312 - type: recall_at_5 value: 96.295 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.055 - type: map_at_10 value: 53.114 - type: map_at_100 value: 55.235 - type: map_at_1000 value: 55.345 - type: map_at_3 value: 45.854 - type: map_at_5 value: 50.025 - type: mrr_at_1 value: 60.34 - type: mrr_at_10 value: 68.804 - type: mrr_at_100 value: 69.309 - type: mrr_at_1000 value: 69.32199999999999 - type: mrr_at_3 value: 66.40899999999999 - type: mrr_at_5 value: 67.976 - type: ndcg_at_1 value: 60.34 - type: ndcg_at_10 value: 62.031000000000006 - type: ndcg_at_100 value: 68.00500000000001 - type: ndcg_at_1000 value: 69.286 - type: ndcg_at_3 value: 56.355999999999995 - type: ndcg_at_5 value: 58.687 - type: precision_at_1 value: 60.34 - type: precision_at_10 value: 17.176 - type: precision_at_100 value: 2.36 - type: precision_at_1000 value: 0.259 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 27.809 - type: recall_at_1 value: 32.055 - type: recall_at_10 value: 70.91 - type: recall_at_100 value: 91.83 - type: recall_at_1000 value: 98.871 - type: recall_at_3 value: 51.202999999999996 - type: recall_at_5 value: 60.563 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 43.68 - type: map_at_10 value: 64.389 - type: map_at_100 value: 65.24 - type: map_at_1000 value: 65.303 - type: map_at_3 value: 61.309000000000005 - type: map_at_5 value: 63.275999999999996 - type: mrr_at_1 value: 87.36 - type: mrr_at_10 value: 91.12 - type: mrr_at_100 value: 91.227 - type: mrr_at_1000 value: 91.229 - type: mrr_at_3 value: 90.57600000000001 - type: mrr_at_5 value: 90.912 - type: ndcg_at_1 value: 87.36 - type: ndcg_at_10 value: 73.076 - type: ndcg_at_100 value: 75.895 - type: ndcg_at_1000 value: 77.049 - type: ndcg_at_3 value: 68.929 - type: ndcg_at_5 value: 71.28 - type: precision_at_1 value: 87.36 - type: precision_at_10 value: 14.741000000000001 - type: precision_at_100 value: 1.694 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 43.043 - type: precision_at_5 value: 27.681 - type: recall_at_1 value: 43.68 - type: recall_at_10 value: 73.707 - type: recall_at_100 value: 84.7 - type: recall_at_1000 value: 92.309 - type: recall_at_3 value: 64.564 - type: recall_at_5 value: 69.203 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 96.75399999999999 - type: ap value: 95.29389839242187 - type: f1 value: 96.75348377433475 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 25.176 - type: map_at_10 value: 38.598 - type: map_at_100 value: 39.707 - type: map_at_1000 value: 39.744 - type: map_at_3 value: 34.566 - type: map_at_5 value: 36.863 - type: mrr_at_1 value: 
25.874000000000002 - type: mrr_at_10 value: 39.214 - type: mrr_at_100 value: 40.251 - type: mrr_at_1000 value: 40.281 - type: mrr_at_3 value: 35.291 - type: mrr_at_5 value: 37.545 - type: ndcg_at_1 value: 25.874000000000002 - type: ndcg_at_10 value: 45.98 - type: ndcg_at_100 value: 51.197 - type: ndcg_at_1000 value: 52.073 - type: ndcg_at_3 value: 37.785999999999994 - type: ndcg_at_5 value: 41.870000000000005 - type: precision_at_1 value: 25.874000000000002 - type: precision_at_10 value: 7.181 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 16.051000000000002 - type: precision_at_5 value: 11.713 - type: recall_at_1 value: 25.176 - type: recall_at_10 value: 68.67699999999999 - type: recall_at_100 value: 92.55 - type: recall_at_1000 value: 99.164 - type: recall_at_3 value: 46.372 - type: recall_at_5 value: 56.16 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 99.03784769721841 - type: f1 value: 98.97791641821495 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 91.88326493388054 - type: f1 value: 73.74809928034335 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 85.41358439811701 - type: f1 value: 83.503679460639 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 89.77135171486215 - type: f1 value: 88.89843747468366 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 46.22695362087359 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 44.132372165849425 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.35680810650402 - type: mrr value: 34.72625715637218 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 7.165000000000001 - type: map_at_10 value: 15.424 - type: map_at_100 value: 20.28 - type: map_at_1000 value: 22.065 - type: map_at_3 value: 11.236 - type: map_at_5 value: 13.025999999999998 - type: mrr_at_1 value: 51.702999999999996 - type: mrr_at_10 value: 59.965 - type: mrr_at_100 value: 60.667 - type: mrr_at_1000 value: 60.702999999999996 - type: mrr_at_3 value: 58.772000000000006 - type: mrr_at_5 value: 59.267 - type: ndcg_at_1 value: 49.536 - type: ndcg_at_10 value: 40.6 - type: ndcg_at_100 value: 37.848 - type: ndcg_at_1000 value: 46.657 - type: ndcg_at_3 value: 46.117999999999995 - type: ndcg_at_5 value: 43.619 - type: precision_at_1 value: 51.393 - type: precision_at_10 value: 
30.31 - type: precision_at_100 value: 9.972 - type: precision_at_1000 value: 2.329 - type: precision_at_3 value: 43.137 - type: precision_at_5 value: 37.585 - type: recall_at_1 value: 7.165000000000001 - type: recall_at_10 value: 19.689999999999998 - type: recall_at_100 value: 39.237 - type: recall_at_1000 value: 71.417 - type: recall_at_3 value: 12.247 - type: recall_at_5 value: 14.902999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 42.653999999999996 - type: map_at_10 value: 59.611999999999995 - type: map_at_100 value: 60.32300000000001 - type: map_at_1000 value: 60.336 - type: map_at_3 value: 55.584999999999994 - type: map_at_5 value: 58.19 - type: mrr_at_1 value: 47.683 - type: mrr_at_10 value: 62.06700000000001 - type: mrr_at_100 value: 62.537 - type: mrr_at_1000 value: 62.544999999999995 - type: mrr_at_3 value: 59.178 - type: mrr_at_5 value: 61.034 - type: ndcg_at_1 value: 47.654 - type: ndcg_at_10 value: 67.001 - type: ndcg_at_100 value: 69.73899999999999 - type: ndcg_at_1000 value: 69.986 - type: ndcg_at_3 value: 59.95700000000001 - type: ndcg_at_5 value: 64.025 - type: precision_at_1 value: 47.654 - type: precision_at_10 value: 10.367999999999999 - type: precision_at_100 value: 1.192 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 26.651000000000003 - type: precision_at_5 value: 18.459 - type: recall_at_1 value: 42.653999999999996 - type: recall_at_10 value: 86.619 - type: recall_at_100 value: 98.04899999999999 - type: recall_at_1000 value: 99.812 - type: recall_at_3 value: 68.987 - type: recall_at_5 value: 78.158 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: None metrics: - type: map_at_1 value: 72.538 - type: map_at_10 value: 86.702 - type: map_at_100 value: 87.31 - type: map_at_1000 value: 87.323 - type: map_at_3 value: 83.87 - type: map_at_5 value: 85.682 - type: mrr_at_1 value: 83.31 - type: mrr_at_10 value: 89.225 - type: mrr_at_100 value: 89.30399999999999 - type: mrr_at_1000 value: 89.30399999999999 - type: mrr_at_3 value: 88.44300000000001 - type: mrr_at_5 value: 89.005 - type: ndcg_at_1 value: 83.32000000000001 - type: ndcg_at_10 value: 90.095 - type: ndcg_at_100 value: 91.12 - type: ndcg_at_1000 value: 91.179 - type: ndcg_at_3 value: 87.606 - type: ndcg_at_5 value: 89.031 - type: precision_at_1 value: 83.32000000000001 - type: precision_at_10 value: 13.641 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.377 - type: precision_at_5 value: 25.162000000000003 - type: recall_at_1 value: 72.538 - type: recall_at_10 value: 96.47200000000001 - type: recall_at_100 value: 99.785 - type: recall_at_1000 value: 99.99900000000001 - type: recall_at_3 value: 89.278 - type: recall_at_5 value: 93.367 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 73.55219145406065 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 74.13437105242755 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 6.873 - type: 
map_at_10 value: 17.944 - type: map_at_100 value: 21.171 - type: map_at_1000 value: 21.528 - type: map_at_3 value: 12.415 - type: map_at_5 value: 15.187999999999999 - type: mrr_at_1 value: 33.800000000000004 - type: mrr_at_10 value: 46.455 - type: mrr_at_100 value: 47.378 - type: mrr_at_1000 value: 47.394999999999996 - type: mrr_at_3 value: 42.367 - type: mrr_at_5 value: 44.972 - type: ndcg_at_1 value: 33.800000000000004 - type: ndcg_at_10 value: 28.907 - type: ndcg_at_100 value: 39.695 - type: ndcg_at_1000 value: 44.582 - type: ndcg_at_3 value: 26.949 - type: ndcg_at_5 value: 23.988 - type: precision_at_1 value: 33.800000000000004 - type: precision_at_10 value: 15.079999999999998 - type: precision_at_100 value: 3.056 - type: precision_at_1000 value: 0.42100000000000004 - type: precision_at_3 value: 25.167 - type: precision_at_5 value: 21.26 - type: recall_at_1 value: 6.873 - type: recall_at_10 value: 30.568 - type: recall_at_100 value: 62.062 - type: recall_at_1000 value: 85.37700000000001 - type: recall_at_3 value: 15.312999999999999 - type: recall_at_5 value: 21.575 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.37009118256057 - type: cos_sim_spearman value: 79.27986395671529 - type: euclidean_pearson value: 79.18037715442115 - type: euclidean_spearman value: 79.28004791561621 - type: manhattan_pearson value: 79.34062972800541 - type: manhattan_spearman value: 79.43106695543402 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 87.48474767383833 - type: cos_sim_spearman value: 79.54505388752513 - type: euclidean_pearson value: 83.43282704179565 - type: euclidean_spearman value: 79.54579919925405 - type: manhattan_pearson value: 83.77564492427952 - type: manhattan_spearman value: 79.84558396989286 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.803698035802 - type: cos_sim_spearman value: 88.83451367754881 - type: euclidean_pearson value: 88.28939285711628 - type: euclidean_spearman value: 88.83528996073112 - type: manhattan_pearson value: 88.28017412671795 - type: manhattan_spearman value: 88.9228828016344 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.27469288153428 - type: cos_sim_spearman value: 83.87477064876288 - type: euclidean_pearson value: 84.2601737035379 - type: euclidean_spearman value: 83.87431082479074 - type: manhattan_pearson value: 84.3621547772745 - type: manhattan_spearman value: 84.12094375000423 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.12749863201587 - type: cos_sim_spearman value: 88.54287568368565 - type: euclidean_pearson value: 87.90429700607999 - type: euclidean_spearman value: 88.5437689576261 - type: manhattan_pearson value: 88.19276653356833 - type: manhattan_spearman value: 88.99995393814679 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 
85.68398747560902 - type: cos_sim_spearman value: 86.48815303460574 - type: euclidean_pearson value: 85.52356631237954 - type: euclidean_spearman value: 86.486391949551 - type: manhattan_pearson value: 85.67267981761788 - type: manhattan_spearman value: 86.7073696332485 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.9057107443124 - type: cos_sim_spearman value: 88.7312168757697 - type: euclidean_pearson value: 88.72810439714794 - type: euclidean_spearman value: 88.71976185854771 - type: manhattan_pearson value: 88.50433745949111 - type: manhattan_spearman value: 88.51726175544195 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 67.59391795109886 - type: cos_sim_spearman value: 66.87613008631367 - type: euclidean_pearson value: 69.23198488262217 - type: euclidean_spearman value: 66.85427723013692 - type: manhattan_pearson value: 69.50730124841084 - type: manhattan_spearman value: 67.10404669820792 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.0820605344619 - type: cos_sim_spearman value: 86.8518089863434 - type: euclidean_pearson value: 86.31087134689284 - type: euclidean_spearman value: 86.8518520517941 - type: manhattan_pearson value: 86.47203796160612 - type: manhattan_spearman value: 87.1080149734421 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 89.09255369305481 - type: mrr value: 97.10323445617563 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 61.260999999999996 - type: map_at_10 value: 74.043 - type: map_at_100 value: 74.37700000000001 - type: map_at_1000 value: 74.384 - type: map_at_3 value: 71.222 - type: map_at_5 value: 72.875 - type: mrr_at_1 value: 64.333 - type: mrr_at_10 value: 74.984 - type: mrr_at_100 value: 75.247 - type: mrr_at_1000 value: 75.25500000000001 - type: mrr_at_3 value: 73.167 - type: mrr_at_5 value: 74.35000000000001 - type: ndcg_at_1 value: 64.333 - type: ndcg_at_10 value: 79.06 - type: ndcg_at_100 value: 80.416 - type: ndcg_at_1000 value: 80.55600000000001 - type: ndcg_at_3 value: 74.753 - type: ndcg_at_5 value: 76.97500000000001 - type: precision_at_1 value: 64.333 - type: precision_at_10 value: 10.567 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 29.889 - type: precision_at_5 value: 19.533 - type: recall_at_1 value: 61.260999999999996 - type: recall_at_10 value: 93.167 - type: recall_at_100 value: 99.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 81.667 - type: recall_at_5 value: 87.394 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.71980198019801 - type: cos_sim_ap value: 92.81616007802704 - type: cos_sim_f1 value: 
85.17548454688318 - type: cos_sim_precision value: 89.43894389438944 - type: cos_sim_recall value: 81.3 - type: dot_accuracy value: 99.71980198019801 - type: dot_ap value: 92.81398760591358 - type: dot_f1 value: 85.17548454688318 - type: dot_precision value: 89.43894389438944 - type: dot_recall value: 81.3 - type: euclidean_accuracy value: 99.71980198019801 - type: euclidean_ap value: 92.81560637245072 - type: euclidean_f1 value: 85.17548454688318 - type: euclidean_precision value: 89.43894389438944 - type: euclidean_recall value: 81.3 - type: manhattan_accuracy value: 99.73069306930694 - type: manhattan_ap value: 93.14005487480794 - type: manhattan_f1 value: 85.56263269639068 - type: manhattan_precision value: 91.17647058823529 - type: manhattan_recall value: 80.60000000000001 - type: max_accuracy value: 99.73069306930694 - type: max_ap value: 93.14005487480794 - type: max_f1 value: 85.56263269639068 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 79.86443362395185 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 49.40897096662564 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.66040806627947 - type: mrr value: 56.58670475766064 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.51015090598575 - type: cos_sim_spearman value: 31.35016454939226 - type: dot_pearson value: 31.5150068731 - type: dot_spearman value: 31.34790869023487 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.254 - type: map_at_10 value: 2.064 - type: map_at_100 value: 12.909 - type: map_at_1000 value: 31.761 - type: map_at_3 value: 0.738 - type: map_at_5 value: 1.155 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 98.0 - type: mrr_at_100 value: 98.0 - type: mrr_at_1000 value: 98.0 - type: mrr_at_3 value: 98.0 - type: mrr_at_5 value: 98.0 - type: ndcg_at_1 value: 93.0 - type: ndcg_at_10 value: 82.258 - type: ndcg_at_100 value: 64.34 - type: ndcg_at_1000 value: 57.912 - type: ndcg_at_3 value: 90.827 - type: ndcg_at_5 value: 86.79 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 84.8 - type: precision_at_100 value: 66.0 - type: precision_at_1000 value: 25.356 - type: precision_at_3 value: 94.667 - type: precision_at_5 value: 90.4 - type: recall_at_1 value: 0.254 - type: recall_at_10 value: 2.1950000000000003 - type: recall_at_100 value: 16.088 - type: recall_at_1000 value: 54.559000000000005 - type: recall_at_3 value: 0.75 - type: recall_at_5 value: 1.191 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 2.976 - type: map_at_10 value: 11.389000000000001 - type: map_at_100 value: 18.429000000000002 - type: map_at_1000 value: 20.113 - type: map_at_3 value: 6.483 - type: map_at_5 value: 8.770999999999999 
- type: mrr_at_1 value: 40.816 - type: mrr_at_10 value: 58.118 - type: mrr_at_100 value: 58.489999999999995 - type: mrr_at_1000 value: 58.489999999999995 - type: mrr_at_3 value: 53.061 - type: mrr_at_5 value: 57.041 - type: ndcg_at_1 value: 40.816 - type: ndcg_at_10 value: 30.567 - type: ndcg_at_100 value: 42.44 - type: ndcg_at_1000 value: 53.480000000000004 - type: ndcg_at_3 value: 36.016 - type: ndcg_at_5 value: 34.257 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 25.714 - type: precision_at_100 value: 8.429 - type: precision_at_1000 value: 1.5939999999999999 - type: precision_at_3 value: 36.735 - type: precision_at_5 value: 33.878 - type: recall_at_1 value: 2.976 - type: recall_at_10 value: 17.854999999999997 - type: recall_at_100 value: 51.833 - type: recall_at_1000 value: 86.223 - type: recall_at_3 value: 7.887 - type: recall_at_5 value: 12.026 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 85.1174 - type: ap value: 30.169441069345748 - type: f1 value: 69.79254701873245 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 72.58347481607245 - type: f1 value: 72.74877295564937 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 53.90586138221305 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.35769207844072 - type: cos_sim_ap value: 77.9645072410354 - type: cos_sim_f1 value: 71.32352941176471 - type: cos_sim_precision value: 66.5903890160183 - type: cos_sim_recall value: 76.78100263852242 - type: dot_accuracy value: 87.37557370209214 - type: dot_ap value: 77.96250046429908 - type: dot_f1 value: 71.28932757557064 - type: dot_precision value: 66.95249130938586 - type: dot_recall value: 76.22691292875989 - type: euclidean_accuracy value: 87.35173153722357 - type: euclidean_ap value: 77.96520460741593 - type: euclidean_f1 value: 71.32470733210104 - type: euclidean_precision value: 66.91329479768785 - type: euclidean_recall value: 76.35883905013192 - type: manhattan_accuracy value: 87.25636287774931 - type: manhattan_ap value: 77.77752485611796 - type: manhattan_f1 value: 71.18148599269183 - type: manhattan_precision value: 66.10859728506787 - type: manhattan_recall value: 77.0976253298153 - type: max_accuracy value: 87.37557370209214 - type: max_ap value: 77.96520460741593 - type: max_f1 value: 71.32470733210104 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.38176737687739 - type: cos_sim_ap value: 86.58811861657401 - type: cos_sim_f1 value: 79.09430644097604 - type: cos_sim_precision value: 75.45085977911366 - type: cos_sim_recall value: 83.10748383122882 - type: dot_accuracy value: 89.38370784336554 - type: dot_ap value: 86.58840606004333 - type: dot_f1 
value: 79.10179860068133 - type: dot_precision value: 75.44546153308643 - type: dot_recall value: 83.13058207576223 - type: euclidean_accuracy value: 89.38564830985369 - type: euclidean_ap value: 86.58820721061164 - type: euclidean_f1 value: 79.09070942235888 - type: euclidean_precision value: 75.38729937194697 - type: euclidean_recall value: 83.17677856482906 - type: manhattan_accuracy value: 89.40699344122326 - type: manhattan_ap value: 86.60631843011362 - type: manhattan_f1 value: 79.14949970570925 - type: manhattan_precision value: 75.78191039729502 - type: manhattan_recall value: 82.83030489682784 - type: max_accuracy value: 89.40699344122326 - type: max_ap value: 86.60631843011362 - type: max_f1 value: 79.14949970570925 - task: type: STS dataset: name: MTEB AFQMC type: C-MTEB/AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cos_sim_pearson value: 65.58442135663871 - type: cos_sim_spearman value: 72.2538631361313 - type: euclidean_pearson value: 70.97255486607429 - type: euclidean_spearman value: 72.25374250228647 - type: manhattan_pearson value: 70.83250199989911 - type: manhattan_spearman value: 72.14819496536272 - task: type: STS dataset: name: MTEB ATEC type: C-MTEB/ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cos_sim_pearson value: 59.99478404929932 - type: cos_sim_spearman value: 62.61836216999812 - type: euclidean_pearson value: 66.86429811933593 - type: euclidean_spearman value: 62.6183520374191 - type: manhattan_pearson value: 66.8063778911633 - type: manhattan_spearman value: 62.569607573241115 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 53.98400000000001 - type: f1 value: 51.21447361350723 - task: type: STS dataset: name: MTEB BQ type: C-MTEB/BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cos_sim_pearson value: 79.11941660686553 - type: cos_sim_spearman value: 81.25029594540435 - type: euclidean_pearson value: 82.06973504238826 - type: euclidean_spearman value: 81.2501989488524 - type: manhattan_pearson value: 82.10094630392753 - type: manhattan_spearman value: 81.27987244392389 - task: type: Clustering dataset: name: MTEB CLSClusteringP2P type: C-MTEB/CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: v_measure value: 47.07270168705156 - task: type: Clustering dataset: name: MTEB CLSClusteringS2S type: C-MTEB/CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: v_measure value: 45.98511703185043 - task: type: Reranking dataset: name: MTEB CMedQAv1 type: C-MTEB/CMedQAv1-reranking config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: map value: 88.19895157194931 - type: mrr value: 90.21424603174603 - task: type: Reranking dataset: name: MTEB CMedQAv2 type: C-MTEB/CMedQAv2-reranking config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: map value: 88.03317320980119 - type: mrr value: 89.9461507936508 - task: type: Retrieval dataset: name: MTEB CmedqaRetrieval type: C-MTEB/CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: map_at_1 value: 29.037000000000003 - type: map_at_10 
value: 42.001 - type: map_at_100 value: 43.773 - type: map_at_1000 value: 43.878 - type: map_at_3 value: 37.637 - type: map_at_5 value: 40.034 - type: mrr_at_1 value: 43.136 - type: mrr_at_10 value: 51.158 - type: mrr_at_100 value: 52.083 - type: mrr_at_1000 value: 52.12 - type: mrr_at_3 value: 48.733 - type: mrr_at_5 value: 50.025 - type: ndcg_at_1 value: 43.136 - type: ndcg_at_10 value: 48.685 - type: ndcg_at_100 value: 55.513 - type: ndcg_at_1000 value: 57.242000000000004 - type: ndcg_at_3 value: 43.329 - type: ndcg_at_5 value: 45.438 - type: precision_at_1 value: 43.136 - type: precision_at_10 value: 10.56 - type: precision_at_100 value: 1.6129999999999998 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 24.064 - type: precision_at_5 value: 17.269000000000002 - type: recall_at_1 value: 29.037000000000003 - type: recall_at_10 value: 59.245000000000005 - type: recall_at_100 value: 87.355 - type: recall_at_1000 value: 98.74000000000001 - type: recall_at_3 value: 42.99 - type: recall_at_5 value: 49.681999999999995 - task: type: PairClassification dataset: name: MTEB Cmnli type: C-MTEB/CMNLI config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cos_sim_accuracy value: 82.68190018039687 - type: cos_sim_ap value: 90.18017125327886 - type: cos_sim_f1 value: 83.64080906868193 - type: cos_sim_precision value: 79.7076890489303 - type: cos_sim_recall value: 87.98223053542202 - type: dot_accuracy value: 82.68190018039687 - type: dot_ap value: 90.18782350103646 - type: dot_f1 value: 83.64242087729039 - type: dot_precision value: 79.65313028764805 - type: dot_recall value: 88.05237315875614 - type: euclidean_accuracy value: 82.68190018039687 - type: euclidean_ap value: 90.1801957900632 - type: euclidean_f1 value: 83.63636363636364 - type: euclidean_precision value: 79.52772506852203 - type: euclidean_recall value: 88.19265840542437 - type: manhattan_accuracy value: 82.14070956103427 - type: manhattan_ap value: 89.96178420101427 - type: manhattan_f1 value: 83.21087838578791 - type: manhattan_precision value: 78.35605121850475 - type: manhattan_recall value: 88.70703764320785 - type: max_accuracy value: 82.68190018039687 - type: max_ap value: 90.18782350103646 - type: max_f1 value: 83.64242087729039 - task: type: Retrieval dataset: name: MTEB CovidRetrieval type: C-MTEB/CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: map_at_1 value: 72.234 - type: map_at_10 value: 80.10000000000001 - type: map_at_100 value: 80.36 - type: map_at_1000 value: 80.363 - type: map_at_3 value: 78.315 - type: map_at_5 value: 79.607 - type: mrr_at_1 value: 72.392 - type: mrr_at_10 value: 80.117 - type: mrr_at_100 value: 80.36999999999999 - type: mrr_at_1000 value: 80.373 - type: mrr_at_3 value: 78.469 - type: mrr_at_5 value: 79.633 - type: ndcg_at_1 value: 72.392 - type: ndcg_at_10 value: 83.651 - type: ndcg_at_100 value: 84.749 - type: ndcg_at_1000 value: 84.83000000000001 - type: ndcg_at_3 value: 80.253 - type: ndcg_at_5 value: 82.485 - type: precision_at_1 value: 72.392 - type: precision_at_10 value: 9.557 - type: precision_at_100 value: 1.004 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 28.732000000000003 - type: precision_at_5 value: 18.377 - type: recall_at_1 value: 72.234 - type: recall_at_10 value: 94.573 - type: recall_at_100 value: 99.368 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 85.669 - type: recall_at_5 value: 91.01700000000001 - task: type: 
Retrieval dataset: name: MTEB DuRetrieval type: C-MTEB/DuRetrieval config: default split: dev revision: a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: map_at_1 value: 26.173999999999996 - type: map_at_10 value: 80.04 - type: map_at_100 value: 82.94500000000001 - type: map_at_1000 value: 82.98100000000001 - type: map_at_3 value: 55.562999999999995 - type: map_at_5 value: 69.89800000000001 - type: mrr_at_1 value: 89.5 - type: mrr_at_10 value: 92.996 - type: mrr_at_100 value: 93.06400000000001 - type: mrr_at_1000 value: 93.065 - type: mrr_at_3 value: 92.658 - type: mrr_at_5 value: 92.84599999999999 - type: ndcg_at_1 value: 89.5 - type: ndcg_at_10 value: 87.443 - type: ndcg_at_100 value: 90.253 - type: ndcg_at_1000 value: 90.549 - type: ndcg_at_3 value: 85.874 - type: ndcg_at_5 value: 84.842 - type: precision_at_1 value: 89.5 - type: precision_at_10 value: 41.805 - type: precision_at_100 value: 4.827 - type: precision_at_1000 value: 0.49 - type: precision_at_3 value: 76.85 - type: precision_at_5 value: 64.8 - type: recall_at_1 value: 26.173999999999996 - type: recall_at_10 value: 89.101 - type: recall_at_100 value: 98.08099999999999 - type: recall_at_1000 value: 99.529 - type: recall_at_3 value: 57.902 - type: recall_at_5 value: 74.602 - task: type: Retrieval dataset: name: MTEB EcomRetrieval type: C-MTEB/EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: map_at_1 value: 56.10000000000001 - type: map_at_10 value: 66.15299999999999 - type: map_at_100 value: 66.625 - type: map_at_1000 value: 66.636 - type: map_at_3 value: 63.632999999999996 - type: map_at_5 value: 65.293 - type: mrr_at_1 value: 56.10000000000001 - type: mrr_at_10 value: 66.15299999999999 - type: mrr_at_100 value: 66.625 - type: mrr_at_1000 value: 66.636 - type: mrr_at_3 value: 63.632999999999996 - type: mrr_at_5 value: 65.293 - type: ndcg_at_1 value: 56.10000000000001 - type: ndcg_at_10 value: 71.146 - type: ndcg_at_100 value: 73.27799999999999 - type: ndcg_at_1000 value: 73.529 - type: ndcg_at_3 value: 66.09 - type: ndcg_at_5 value: 69.08999999999999 - type: precision_at_1 value: 56.10000000000001 - type: precision_at_10 value: 8.68 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 24.4 - type: precision_at_5 value: 16.1 - type: recall_at_1 value: 56.10000000000001 - type: recall_at_10 value: 86.8 - type: recall_at_100 value: 96.39999999999999 - type: recall_at_1000 value: 98.3 - type: recall_at_3 value: 73.2 - type: recall_at_5 value: 80.5 - task: type: Classification dataset: name: MTEB IFlyTek type: C-MTEB/IFlyTek-classification config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 54.52096960369373 - type: f1 value: 40.930845295808695 - task: type: Classification dataset: name: MTEB JDReview type: C-MTEB/JDReview-classification config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 86.51031894934334 - type: ap value: 55.9516014323483 - type: f1 value: 81.54813679326381 - task: type: STS dataset: name: MTEB LCQMC type: C-MTEB/LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cos_sim_pearson value: 69.67437838574276 - type: cos_sim_spearman value: 73.81314174653045 - type: euclidean_pearson value: 72.63430276680275 - type: euclidean_spearman value: 73.81358736777001 - type: manhattan_pearson value: 72.58743833842829 - type: 
manhattan_spearman value: 73.7590419009179 - task: type: Reranking dataset: name: MTEB MMarcoReranking type: C-MTEB/Mmarco-reranking config: default split: dev revision: None metrics: - type: map value: 31.648613483640254 - type: mrr value: 30.37420634920635 - task: type: Retrieval dataset: name: MTEB MMarcoRetrieval type: C-MTEB/MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: map_at_1 value: 73.28099999999999 - type: map_at_10 value: 81.977 - type: map_at_100 value: 82.222 - type: map_at_1000 value: 82.22699999999999 - type: map_at_3 value: 80.441 - type: map_at_5 value: 81.46600000000001 - type: mrr_at_1 value: 75.673 - type: mrr_at_10 value: 82.41000000000001 - type: mrr_at_100 value: 82.616 - type: mrr_at_1000 value: 82.621 - type: mrr_at_3 value: 81.094 - type: mrr_at_5 value: 81.962 - type: ndcg_at_1 value: 75.673 - type: ndcg_at_10 value: 85.15599999999999 - type: ndcg_at_100 value: 86.151 - type: ndcg_at_1000 value: 86.26899999999999 - type: ndcg_at_3 value: 82.304 - type: ndcg_at_5 value: 84.009 - type: precision_at_1 value: 75.673 - type: precision_at_10 value: 10.042 - type: precision_at_100 value: 1.052 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 30.673000000000002 - type: precision_at_5 value: 19.326999999999998 - type: recall_at_1 value: 73.28099999999999 - type: recall_at_10 value: 94.446 - type: recall_at_100 value: 98.737 - type: recall_at_1000 value: 99.649 - type: recall_at_3 value: 86.984 - type: recall_at_5 value: 91.024 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.08607935440484 - type: f1 value: 78.24879986066307 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.05917955615332 - type: f1 value: 85.05279279434997 - task: type: Retrieval dataset: name: MTEB MedicalRetrieval type: C-MTEB/MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: map_at_1 value: 56.2 - type: map_at_10 value: 62.57899999999999 - type: map_at_100 value: 63.154999999999994 - type: map_at_1000 value: 63.193 - type: map_at_3 value: 61.217 - type: map_at_5 value: 62.012 - type: mrr_at_1 value: 56.3 - type: mrr_at_10 value: 62.629000000000005 - type: mrr_at_100 value: 63.205999999999996 - type: mrr_at_1000 value: 63.244 - type: mrr_at_3 value: 61.267 - type: mrr_at_5 value: 62.062 - type: ndcg_at_1 value: 56.2 - type: ndcg_at_10 value: 65.592 - type: ndcg_at_100 value: 68.657 - type: ndcg_at_1000 value: 69.671 - type: ndcg_at_3 value: 62.808 - type: ndcg_at_5 value: 64.24499999999999 - type: precision_at_1 value: 56.2 - type: precision_at_10 value: 7.5 - type: precision_at_100 value: 0.899 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 22.467000000000002 - type: precision_at_5 value: 14.180000000000001 - type: recall_at_1 value: 56.2 - type: recall_at_10 value: 75.0 - type: recall_at_100 value: 89.9 - type: recall_at_1000 value: 97.89999999999999 - type: recall_at_3 value: 67.4 - type: recall_at_5 value: 70.89999999999999 - task: type: Classification dataset: name: MTEB MultilingualSentiment type: C-MTEB/MultilingualSentiment-classification config: default split: 
validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 76.87666666666667 - type: f1 value: 76.7317686219665 - task: type: PairClassification dataset: name: MTEB Ocnli type: C-MTEB/OCNLI config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cos_sim_accuracy value: 79.64266377910124 - type: cos_sim_ap value: 84.78274442344829 - type: cos_sim_f1 value: 81.16947472745292 - type: cos_sim_precision value: 76.47058823529412 - type: cos_sim_recall value: 86.48363252375924 - type: dot_accuracy value: 79.64266377910124 - type: dot_ap value: 84.7851404063692 - type: dot_f1 value: 81.16947472745292 - type: dot_precision value: 76.47058823529412 - type: dot_recall value: 86.48363252375924 - type: euclidean_accuracy value: 79.64266377910124 - type: euclidean_ap value: 84.78068373762378 - type: euclidean_f1 value: 81.14794656110837 - type: euclidean_precision value: 76.35009310986965 - type: euclidean_recall value: 86.58922914466737 - type: manhattan_accuracy value: 79.48023822414727 - type: manhattan_ap value: 84.72928897427576 - type: manhattan_f1 value: 81.32084770823064 - type: manhattan_precision value: 76.24768946395564 - type: manhattan_recall value: 87.11721224920802 - type: max_accuracy value: 79.64266377910124 - type: max_ap value: 84.7851404063692 - type: max_f1 value: 81.32084770823064 - task: type: Classification dataset: name: MTEB OnlineShopping type: C-MTEB/OnlineShopping-classification config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 94.3 - type: ap value: 92.8664032274438 - type: f1 value: 94.29311102997727 - task: type: STS dataset: name: MTEB PAWSX type: C-MTEB/PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cos_sim_pearson value: 48.51392279882909 - type: cos_sim_spearman value: 54.06338895994974 - type: euclidean_pearson value: 52.58480559573412 - type: euclidean_spearman value: 54.06417276612201 - type: manhattan_pearson value: 52.69525121721343 - type: manhattan_spearman value: 54.048147455389675 - task: type: STS dataset: name: MTEB QBQTC type: C-MTEB/QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cos_sim_pearson value: 29.728387290757325 - type: cos_sim_spearman value: 31.366121633635284 - type: euclidean_pearson value: 29.14588368552961 - type: euclidean_spearman value: 31.36764411112844 - type: manhattan_pearson value: 29.63517350523121 - type: manhattan_spearman value: 31.94157020583762 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 63.64868296271406 - type: cos_sim_spearman value: 66.12800618164744 - type: euclidean_pearson value: 63.21405767340238 - type: euclidean_spearman value: 66.12786567790748 - type: manhattan_pearson value: 64.04300276525848 - type: manhattan_spearman value: 66.5066857145652 - task: type: STS dataset: name: MTEB STSB type: C-MTEB/STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cos_sim_pearson value: 81.2302623912794 - type: cos_sim_spearman value: 81.16833673266562 - type: euclidean_pearson value: 79.47647843876024 - type: euclidean_spearman value: 81.16944349524972 - type: manhattan_pearson value: 79.84947238492208 - type: manhattan_spearman value: 81.64626599410026 - task: type: Reranking 
dataset: name: MTEB T2Reranking type: C-MTEB/T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: map value: 67.80129586475687 - type: mrr value: 77.77402311635554 - task: type: Retrieval dataset: name: MTEB T2Retrieval type: C-MTEB/T2Retrieval config: default split: dev revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: map_at_1 value: 28.666999999999998 - type: map_at_10 value: 81.063 - type: map_at_100 value: 84.504 - type: map_at_1000 value: 84.552 - type: map_at_3 value: 56.897 - type: map_at_5 value: 70.073 - type: mrr_at_1 value: 92.087 - type: mrr_at_10 value: 94.132 - type: mrr_at_100 value: 94.19800000000001 - type: mrr_at_1000 value: 94.19999999999999 - type: mrr_at_3 value: 93.78999999999999 - type: mrr_at_5 value: 94.002 - type: ndcg_at_1 value: 92.087 - type: ndcg_at_10 value: 87.734 - type: ndcg_at_100 value: 90.736 - type: ndcg_at_1000 value: 91.184 - type: ndcg_at_3 value: 88.78 - type: ndcg_at_5 value: 87.676 - type: precision_at_1 value: 92.087 - type: precision_at_10 value: 43.46 - type: precision_at_100 value: 5.07 - type: precision_at_1000 value: 0.518 - type: precision_at_3 value: 77.49000000000001 - type: precision_at_5 value: 65.194 - type: recall_at_1 value: 28.666999999999998 - type: recall_at_10 value: 86.632 - type: recall_at_100 value: 96.646 - type: recall_at_1000 value: 98.917 - type: recall_at_3 value: 58.333999999999996 - type: recall_at_5 value: 72.974 - task: type: Classification dataset: name: MTEB TNews type: C-MTEB/TNews-classification config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 52.971999999999994 - type: f1 value: 50.2898280984929 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringP2P type: C-MTEB/ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: v_measure value: 86.0797948663824 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringS2S type: C-MTEB/ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: v_measure value: 85.10759092255017 - task: type: Retrieval dataset: name: MTEB VideoRetrieval type: C-MTEB/VideoRetrieval config: default split: dev revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: map_at_1 value: 65.60000000000001 - type: map_at_10 value: 74.773 - type: map_at_100 value: 75.128 - type: map_at_1000 value: 75.136 - type: map_at_3 value: 73.05 - type: map_at_5 value: 74.13499999999999 - type: mrr_at_1 value: 65.60000000000001 - type: mrr_at_10 value: 74.773 - type: mrr_at_100 value: 75.128 - type: mrr_at_1000 value: 75.136 - type: mrr_at_3 value: 73.05 - type: mrr_at_5 value: 74.13499999999999 - type: ndcg_at_1 value: 65.60000000000001 - type: ndcg_at_10 value: 78.84299999999999 - type: ndcg_at_100 value: 80.40899999999999 - type: ndcg_at_1000 value: 80.57 - type: ndcg_at_3 value: 75.40599999999999 - type: ndcg_at_5 value: 77.351 - type: precision_at_1 value: 65.60000000000001 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 27.400000000000002 - type: precision_at_5 value: 17.380000000000003 - type: recall_at_1 value: 65.60000000000001 - type: recall_at_10 value: 91.4 - type: recall_at_100 value: 98.4 - type: recall_at_1000 value: 99.6 - type: recall_at_3 value: 82.19999999999999 - type: recall_at_5 value: 86.9 - task: type: 
Classification dataset: name: MTEB Waimai type: C-MTEB/waimai-classification config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 89.47 - type: ap value: 75.59561751845389 - type: f1 value: 87.95207751382563 - task: type: Clustering dataset: name: MTEB AlloProfClusteringP2P type: lyon-nlp/alloprof config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 76.05592323841036 - type: v_measure value: 64.51718058866508 - task: type: Reranking dataset: name: MTEB AlloprofReranking type: lyon-nlp/mteb-fr-reranking-alloprof-s2p config: default split: test revision: 666fdacebe0291776e86f29345663dfaf80a0db9 metrics: - type: map value: 73.08278490943373 - type: mrr value: 74.66561454570449 - task: type: Retrieval dataset: name: MTEB AlloprofRetrieval type: lyon-nlp/alloprof config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: map_at_1 value: 38.912 - type: map_at_10 value: 52.437999999999995 - type: map_at_100 value: 53.38 - type: map_at_1000 value: 53.427 - type: map_at_3 value: 48.879 - type: map_at_5 value: 50.934000000000005 - type: mrr_at_1 value: 44.085 - type: mrr_at_10 value: 55.337 - type: mrr_at_100 value: 56.016999999999996 - type: mrr_at_1000 value: 56.043 - type: mrr_at_3 value: 52.55499999999999 - type: mrr_at_5 value: 54.20399999999999 - type: ndcg_at_1 value: 44.085 - type: ndcg_at_10 value: 58.876 - type: ndcg_at_100 value: 62.714000000000006 - type: ndcg_at_1000 value: 63.721000000000004 - type: ndcg_at_3 value: 52.444 - type: ndcg_at_5 value: 55.692 - type: precision_at_1 value: 44.085 - type: precision_at_10 value: 9.21 - type: precision_at_100 value: 1.164 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 23.043 - type: precision_at_5 value: 15.898000000000001 - type: recall_at_1 value: 38.912 - type: recall_at_10 value: 75.577 - type: recall_at_100 value: 92.038 - type: recall_at_1000 value: 99.325 - type: recall_at_3 value: 58.592 - type: recall_at_5 value: 66.235 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 55.532000000000004 - type: f1 value: 52.5783943471605 - task: type: Retrieval dataset: name: MTEB BSARDRetrieval type: maastrichtlawtech/bsard config: default split: test revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59 metrics: - type: map_at_1 value: 8.108 - type: map_at_10 value: 14.710999999999999 - type: map_at_100 value: 15.891 - type: map_at_1000 value: 15.983 - type: map_at_3 value: 12.237 - type: map_at_5 value: 13.679 - type: mrr_at_1 value: 8.108 - type: mrr_at_10 value: 14.710999999999999 - type: mrr_at_100 value: 15.891 - type: mrr_at_1000 value: 15.983 - type: mrr_at_3 value: 12.237 - type: mrr_at_5 value: 13.679 - type: ndcg_at_1 value: 8.108 - type: ndcg_at_10 value: 18.796 - type: ndcg_at_100 value: 25.098 - type: ndcg_at_1000 value: 27.951999999999998 - type: ndcg_at_3 value: 13.712 - type: ndcg_at_5 value: 16.309 - type: precision_at_1 value: 8.108 - type: precision_at_10 value: 3.198 - type: precision_at_100 value: 0.626 - type: precision_at_1000 value: 0.086 - type: precision_at_3 value: 6.006 - type: precision_at_5 value: 4.865 - type: recall_at_1 value: 8.108 - type: recall_at_10 value: 31.982 - type: recall_at_100 value: 62.613 - type: recall_at_1000 value: 86.036 - type: recall_at_3 value: 18.018 - 
type: recall_at_5 value: 24.324 - task: type: Clustering dataset: name: MTEB HALClusteringS2S type: lyon-nlp/clustering-hal-s2s config: default split: test revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915 metrics: - type: v_measure value: 30.833269778867116 - task: type: Clustering dataset: name: MTEB MLSUMClusteringP2P type: mlsum config: default split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 50.0281928004713 - type: v_measure value: 43.699961510636534 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.68963357344191 - type: f1 value: 96.45175170820961 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 87.46946445349202 - type: f1 value: 65.79860440988624 - task: type: Classification dataset: name: MTEB MasakhaNEWSClassification (fra) type: masakhane/masakhanews config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: accuracy value: 82.60663507109005 - type: f1 value: 77.20462646604777 - task: type: Clustering dataset: name: MTEB MasakhaNEWSClusteringP2P (fra) type: masakhane/masakhanews config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 60.19311264967803 - type: v_measure value: 63.6235764409785 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.65097511768661 - type: f1 value: 78.77796091490924 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.64425016812373 - type: f1 value: 85.4912728670017 - task: type: Retrieval dataset: name: MTEB MintakaRetrieval (fr) type: jinaai/mintakaqa config: fr split: test revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e metrics: - type: map_at_1 value: 35.913000000000004 - type: map_at_10 value: 48.147 - type: map_at_100 value: 48.91 - type: map_at_1000 value: 48.949 - type: map_at_3 value: 45.269999999999996 - type: map_at_5 value: 47.115 - type: mrr_at_1 value: 35.913000000000004 - type: mrr_at_10 value: 48.147 - type: mrr_at_100 value: 48.91 - type: mrr_at_1000 value: 48.949 - type: mrr_at_3 value: 45.269999999999996 - type: mrr_at_5 value: 47.115 - type: ndcg_at_1 value: 35.913000000000004 - type: ndcg_at_10 value: 54.03 - type: ndcg_at_100 value: 57.839 - type: ndcg_at_1000 value: 58.925000000000004 - type: ndcg_at_3 value: 48.217999999999996 - type: ndcg_at_5 value: 51.56699999999999 - type: precision_at_1 value: 35.913000000000004 - type: precision_at_10 value: 7.244000000000001 - type: precision_at_100 value: 0.9039999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 18.905 - type: precision_at_5 value: 12.981000000000002 - type: recall_at_1 value: 35.913000000000004 - type: recall_at_10 value: 72.441 - type: recall_at_100 value: 90.41799999999999 - type: recall_at_1000 value: 99.099 - type: recall_at_3 value: 56.716 - type: recall_at_5 value: 64.90599999999999 - task: type: PairClassification dataset: name: MTEB 
OpusparcusPC (fr) type: GEM/opusparcus config: fr split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_accuracy value: 99.90069513406156 - type: cos_sim_ap value: 100.0 - type: cos_sim_f1 value: 99.95032290114257 - type: cos_sim_precision value: 100.0 - type: cos_sim_recall value: 99.90069513406156 - type: dot_accuracy value: 99.90069513406156 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.95032290114257 - type: dot_precision value: 100.0 - type: dot_recall value: 99.90069513406156 - type: euclidean_accuracy value: 99.90069513406156 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.95032290114257 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.90069513406156 - type: manhattan_accuracy value: 99.90069513406156 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.95032290114257 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.90069513406156 - type: max_accuracy value: 99.90069513406156 - type: max_ap value: 100.0 - type: max_f1 value: 99.95032290114257 - task: type: PairClassification dataset: name: MTEB PawsX (fr) type: paws-x config: fr split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 75.25 - type: cos_sim_ap value: 80.86376001270014 - type: cos_sim_f1 value: 73.65945437441204 - type: cos_sim_precision value: 64.02289452166802 - type: cos_sim_recall value: 86.71096345514951 - type: dot_accuracy value: 75.25 - type: dot_ap value: 80.93686107633002 - type: dot_f1 value: 73.65945437441204 - type: dot_precision value: 64.02289452166802 - type: dot_recall value: 86.71096345514951 - type: euclidean_accuracy value: 75.25 - type: euclidean_ap value: 80.86379136218862 - type: euclidean_f1 value: 73.65945437441204 - type: euclidean_precision value: 64.02289452166802 - type: euclidean_recall value: 86.71096345514951 - type: manhattan_accuracy value: 75.3 - type: manhattan_ap value: 80.87826606097734 - type: manhattan_f1 value: 73.68421052631581 - type: manhattan_precision value: 64.0 - type: manhattan_recall value: 86.82170542635659 - type: max_accuracy value: 75.3 - type: max_ap value: 80.93686107633002 - type: max_f1 value: 73.68421052631581 - task: type: STS dataset: name: MTEB SICKFr type: Lajavaness/SICK-fr config: default split: test revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a metrics: - type: cos_sim_pearson value: 81.42349425981143 - type: cos_sim_spearman value: 78.90454327031226 - type: euclidean_pearson value: 78.39086497435166 - type: euclidean_spearman value: 78.9046133980509 - type: manhattan_pearson value: 78.63743094286502 - type: manhattan_spearman value: 79.12136348449269 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 81.452697919749 - type: cos_sim_spearman value: 82.58116836039301 - type: euclidean_pearson value: 81.04038478932786 - type: euclidean_spearman value: 82.58116836039301 - type: manhattan_pearson value: 81.37075396187771 - type: manhattan_spearman value: 82.73678231355368 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (fr) type: stsb_multi_mt config: fr split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_pearson value: 85.7419764013806 - type: cos_sim_spearman value: 85.46085808849622 - type: euclidean_pearson value: 83.70449639870063 - type: euclidean_spearman value: 85.46159013076233 - 
type: manhattan_pearson value: 83.95259510313929 - type: manhattan_spearman value: 85.8029724659458 - task: type: Summarization dataset: name: MTEB SummEvalFr type: lyon-nlp/summarization-summeval-fr-p2p config: default split: test revision: b385812de6a9577b6f4d0f88c6a6e35395a94054 metrics: - type: cos_sim_pearson value: 32.61063271753325 - type: cos_sim_spearman value: 31.454589417353603 - type: dot_pearson value: 32.6106288643431 - type: dot_spearman value: 31.454589417353603 - task: type: Reranking dataset: name: MTEB SyntecReranking type: lyon-nlp/mteb-fr-reranking-syntec-s2p config: default split: test revision: b205c5084a0934ce8af14338bf03feb19499c84d metrics: - type: map value: 84.31666666666666 - type: mrr value: 84.31666666666666 - task: type: Retrieval dataset: name: MTEB SyntecRetrieval type: lyon-nlp/mteb-fr-retrieval-syntec-s2p config: default split: test revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff metrics: - type: map_at_1 value: 63.0 - type: map_at_10 value: 73.471 - type: map_at_100 value: 73.87 - type: map_at_1000 value: 73.87 - type: map_at_3 value: 70.5 - type: map_at_5 value: 73.05 - type: mrr_at_1 value: 63.0 - type: mrr_at_10 value: 73.471 - type: mrr_at_100 value: 73.87 - type: mrr_at_1000 value: 73.87 - type: mrr_at_3 value: 70.5 - type: mrr_at_5 value: 73.05 - type: ndcg_at_1 value: 63.0 - type: ndcg_at_10 value: 78.255 - type: ndcg_at_100 value: 79.88 - type: ndcg_at_1000 value: 79.88 - type: ndcg_at_3 value: 72.702 - type: ndcg_at_5 value: 77.264 - type: precision_at_1 value: 63.0 - type: precision_at_10 value: 9.3 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 26.333000000000002 - type: precision_at_5 value: 18.0 - type: recall_at_1 value: 63.0 - type: recall_at_10 value: 93.0 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 79.0 - type: recall_at_5 value: 90.0 - task: type: Retrieval dataset: name: MTEB XPQARetrieval (fr) type: jinaai/xpqa config: fr split: test revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f metrics: - type: map_at_1 value: 40.338 - type: map_at_10 value: 61.927 - type: map_at_100 value: 63.361999999999995 - type: map_at_1000 value: 63.405 - type: map_at_3 value: 55.479 - type: map_at_5 value: 59.732 - type: mrr_at_1 value: 63.551 - type: mrr_at_10 value: 71.006 - type: mrr_at_100 value: 71.501 - type: mrr_at_1000 value: 71.509 - type: mrr_at_3 value: 69.07 - type: mrr_at_5 value: 70.165 - type: ndcg_at_1 value: 63.551 - type: ndcg_at_10 value: 68.297 - type: ndcg_at_100 value: 73.13199999999999 - type: ndcg_at_1000 value: 73.751 - type: ndcg_at_3 value: 62.999 - type: ndcg_at_5 value: 64.89 - type: precision_at_1 value: 63.551 - type: precision_at_10 value: 15.661 - type: precision_at_100 value: 1.9789999999999999 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 38.273 - type: precision_at_5 value: 27.61 - type: recall_at_1 value: 40.338 - type: recall_at_10 value: 77.267 - type: recall_at_100 value: 95.892 - type: recall_at_1000 value: 99.75500000000001 - type: recall_at_3 value: 60.36 - type: recall_at_5 value: 68.825 - task: type: Clustering dataset: name: MTEB 8TagsClustering type: PL-MTEB/8tags-clustering config: default split: test revision: None metrics: - type: v_measure value: 51.36126303874126 - task: type: Classification dataset: name: MTEB AllegroReviews type: PL-MTEB/allegro-reviews config: default split: test revision: None metrics: - type: accuracy value: 67.13717693836979 - type: f1 value: 
57.27609848003782 - task: type: Retrieval dataset: name: MTEB ArguAna-PL type: clarin-knext/arguana-pl config: default split: test revision: 63fc86750af76253e8c760fc9e534bbf24d260a2 metrics: - type: map_at_1 value: 35.276999999999994 - type: map_at_10 value: 51.086 - type: map_at_100 value: 51.788000000000004 - type: map_at_1000 value: 51.791 - type: map_at_3 value: 46.147 - type: map_at_5 value: 49.078 - type: mrr_at_1 value: 35.917 - type: mrr_at_10 value: 51.315999999999995 - type: mrr_at_100 value: 52.018 - type: mrr_at_1000 value: 52.022 - type: mrr_at_3 value: 46.349000000000004 - type: mrr_at_5 value: 49.297000000000004 - type: ndcg_at_1 value: 35.276999999999994 - type: ndcg_at_10 value: 59.870999999999995 - type: ndcg_at_100 value: 62.590999999999994 - type: ndcg_at_1000 value: 62.661 - type: ndcg_at_3 value: 49.745 - type: ndcg_at_5 value: 55.067 - type: precision_at_1 value: 35.276999999999994 - type: precision_at_10 value: 8.791 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.057 - type: precision_at_5 value: 14.637 - type: recall_at_1 value: 35.276999999999994 - type: recall_at_10 value: 87.909 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 60.171 - type: recall_at_5 value: 73.18599999999999 - task: type: Classification dataset: name: MTEB CBD type: PL-MTEB/cbd config: default split: test revision: None metrics: - type: accuracy value: 78.03000000000002 - type: ap value: 29.12548553897622 - type: f1 value: 66.54857118886073 - task: type: PairClassification dataset: name: MTEB CDSC-E type: PL-MTEB/cdsce-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 89.0 - type: cos_sim_ap value: 76.75437826834582 - type: cos_sim_f1 value: 66.4850136239782 - type: cos_sim_precision value: 68.92655367231639 - type: cos_sim_recall value: 64.21052631578948 - type: dot_accuracy value: 89.0 - type: dot_ap value: 76.75437826834582 - type: dot_f1 value: 66.4850136239782 - type: dot_precision value: 68.92655367231639 - type: dot_recall value: 64.21052631578948 - type: euclidean_accuracy value: 89.0 - type: euclidean_ap value: 76.75437826834582 - type: euclidean_f1 value: 66.4850136239782 - type: euclidean_precision value: 68.92655367231639 - type: euclidean_recall value: 64.21052631578948 - type: manhattan_accuracy value: 89.0 - type: manhattan_ap value: 76.66074220647083 - type: manhattan_f1 value: 66.47058823529412 - type: manhattan_precision value: 75.33333333333333 - type: manhattan_recall value: 59.473684210526315 - type: max_accuracy value: 89.0 - type: max_ap value: 76.75437826834582 - type: max_f1 value: 66.4850136239782 - task: type: STS dataset: name: MTEB CDSC-R type: PL-MTEB/cdscr-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 93.12903172428328 - type: cos_sim_spearman value: 92.66381487060741 - type: euclidean_pearson value: 90.37278396708922 - type: euclidean_spearman value: 92.66381487060741 - type: manhattan_pearson value: 90.32503296540962 - type: manhattan_spearman value: 92.6902938354313 - task: type: Retrieval dataset: name: MTEB DBPedia-PL type: clarin-knext/dbpedia-pl config: default split: test revision: 76afe41d9af165cc40999fcaa92312b8b012064a metrics: - type: map_at_1 value: 8.83 - type: map_at_10 value: 18.326 - type: map_at_100 value: 26.496 - type: map_at_1000 value: 28.455000000000002 - type: map_at_3 value: 12.933 - type: map_at_5 value: 15.168000000000001 
- type: mrr_at_1 value: 66.0 - type: mrr_at_10 value: 72.76700000000001 - type: mrr_at_100 value: 73.203 - type: mrr_at_1000 value: 73.219 - type: mrr_at_3 value: 71.458 - type: mrr_at_5 value: 72.246 - type: ndcg_at_1 value: 55.375 - type: ndcg_at_10 value: 41.3 - type: ndcg_at_100 value: 45.891 - type: ndcg_at_1000 value: 52.905 - type: ndcg_at_3 value: 46.472 - type: ndcg_at_5 value: 43.734 - type: precision_at_1 value: 66.0 - type: precision_at_10 value: 33.074999999999996 - type: precision_at_100 value: 11.094999999999999 - type: precision_at_1000 value: 2.374 - type: precision_at_3 value: 48.583 - type: precision_at_5 value: 42.0 - type: recall_at_1 value: 8.83 - type: recall_at_10 value: 22.587 - type: recall_at_100 value: 50.61600000000001 - type: recall_at_1000 value: 73.559 - type: recall_at_3 value: 13.688 - type: recall_at_5 value: 16.855 - task: type: Retrieval dataset: name: MTEB FiQA-PL type: clarin-knext/fiqa-pl config: default split: test revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e metrics: - type: map_at_1 value: 20.587 - type: map_at_10 value: 33.095 - type: map_at_100 value: 35.24 - type: map_at_1000 value: 35.429 - type: map_at_3 value: 28.626 - type: map_at_5 value: 31.136999999999997 - type: mrr_at_1 value: 40.586 - type: mrr_at_10 value: 49.033 - type: mrr_at_100 value: 49.952999999999996 - type: mrr_at_1000 value: 49.992 - type: mrr_at_3 value: 46.553 - type: mrr_at_5 value: 48.035 - type: ndcg_at_1 value: 40.586 - type: ndcg_at_10 value: 41.046 - type: ndcg_at_100 value: 48.586 - type: ndcg_at_1000 value: 51.634 - type: ndcg_at_3 value: 36.773 - type: ndcg_at_5 value: 38.389 - type: precision_at_1 value: 40.586 - type: precision_at_10 value: 11.466 - type: precision_at_100 value: 1.909 - type: precision_at_1000 value: 0.245 - type: precision_at_3 value: 24.434 - type: precision_at_5 value: 18.426000000000002 - type: recall_at_1 value: 20.587 - type: recall_at_10 value: 47.986000000000004 - type: recall_at_100 value: 75.761 - type: recall_at_1000 value: 94.065 - type: recall_at_3 value: 33.339 - type: recall_at_5 value: 39.765 - task: type: Retrieval dataset: name: MTEB HotpotQA-PL type: clarin-knext/hotpotqa-pl config: default split: test revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907 metrics: - type: map_at_1 value: 40.878 - type: map_at_10 value: 58.775999999999996 - type: map_at_100 value: 59.632 - type: map_at_1000 value: 59.707 - type: map_at_3 value: 56.074 - type: map_at_5 value: 57.629 - type: mrr_at_1 value: 81.756 - type: mrr_at_10 value: 86.117 - type: mrr_at_100 value: 86.299 - type: mrr_at_1000 value: 86.30600000000001 - type: mrr_at_3 value: 85.345 - type: mrr_at_5 value: 85.832 - type: ndcg_at_1 value: 81.756 - type: ndcg_at_10 value: 67.608 - type: ndcg_at_100 value: 70.575 - type: ndcg_at_1000 value: 71.99600000000001 - type: ndcg_at_3 value: 63.723 - type: ndcg_at_5 value: 65.70700000000001 - type: precision_at_1 value: 81.756 - type: precision_at_10 value: 13.619 - type: precision_at_100 value: 1.5939999999999999 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 39.604 - type: precision_at_5 value: 25.332 - type: recall_at_1 value: 40.878 - type: recall_at_10 value: 68.096 - type: recall_at_100 value: 79.696 - type: recall_at_1000 value: 89.082 - type: recall_at_3 value: 59.406000000000006 - type: recall_at_5 value: 63.329 - task: type: Retrieval dataset: name: MTEB MSMARCO-PL type: clarin-knext/msmarco-pl config: default split: test revision: 8634c07806d5cce3a6138e260e59b81760a0a640 metrics: - type: map_at_1 value: 
2.1839999999999997 - type: map_at_10 value: 11.346 - type: map_at_100 value: 30.325000000000003 - type: map_at_1000 value: 37.806 - type: map_at_3 value: 4.842 - type: map_at_5 value: 6.891 - type: mrr_at_1 value: 86.047 - type: mrr_at_10 value: 89.14699999999999 - type: mrr_at_100 value: 89.46600000000001 - type: mrr_at_1000 value: 89.46600000000001 - type: mrr_at_3 value: 89.14699999999999 - type: mrr_at_5 value: 89.14699999999999 - type: ndcg_at_1 value: 67.829 - type: ndcg_at_10 value: 62.222 - type: ndcg_at_100 value: 55.337 - type: ndcg_at_1000 value: 64.076 - type: ndcg_at_3 value: 68.12700000000001 - type: ndcg_at_5 value: 64.987 - type: precision_at_1 value: 86.047 - type: precision_at_10 value: 69.535 - type: precision_at_100 value: 32.93 - type: precision_at_1000 value: 6.6049999999999995 - type: precision_at_3 value: 79.845 - type: precision_at_5 value: 75.349 - type: recall_at_1 value: 2.1839999999999997 - type: recall_at_10 value: 12.866 - type: recall_at_100 value: 43.505 - type: recall_at_1000 value: 72.366 - type: recall_at_3 value: 4.947 - type: recall_at_5 value: 7.192 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 80.75319435104238 - type: f1 value: 77.58961444860606 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 85.54472091459313 - type: f1 value: 84.29498563572106 - task: type: Retrieval dataset: name: MTEB NFCorpus-PL type: clarin-knext/nfcorpus-pl config: default split: test revision: 9a6f9567fda928260afed2de480d79c98bf0bec0 metrics: - type: map_at_1 value: 4.367 - type: map_at_10 value: 10.38 - type: map_at_100 value: 13.516 - type: map_at_1000 value: 14.982000000000001 - type: map_at_3 value: 7.367 - type: map_at_5 value: 8.59 - type: mrr_at_1 value: 41.486000000000004 - type: mrr_at_10 value: 48.886 - type: mrr_at_100 value: 49.657000000000004 - type: mrr_at_1000 value: 49.713 - type: mrr_at_3 value: 46.904 - type: mrr_at_5 value: 48.065000000000005 - type: ndcg_at_1 value: 40.402 - type: ndcg_at_10 value: 30.885 - type: ndcg_at_100 value: 28.393 - type: ndcg_at_1000 value: 37.428 - type: ndcg_at_3 value: 35.394999999999996 - type: ndcg_at_5 value: 33.391999999999996 - type: precision_at_1 value: 41.486000000000004 - type: precision_at_10 value: 23.437 - type: precision_at_100 value: 7.638 - type: precision_at_1000 value: 2.0389999999999997 - type: precision_at_3 value: 32.817 - type: precision_at_5 value: 28.915999999999997 - type: recall_at_1 value: 4.367 - type: recall_at_10 value: 14.655000000000001 - type: recall_at_100 value: 29.665999999999997 - type: recall_at_1000 value: 62.073 - type: recall_at_3 value: 8.51 - type: recall_at_5 value: 10.689 - task: type: Retrieval dataset: name: MTEB NQ-PL type: clarin-knext/nq-pl config: default split: test revision: f171245712cf85dd4700b06bef18001578d0ca8d metrics: - type: map_at_1 value: 28.616000000000003 - type: map_at_10 value: 41.626000000000005 - type: map_at_100 value: 42.689 - type: map_at_1000 value: 42.733 - type: map_at_3 value: 37.729 - type: map_at_5 value: 39.879999999999995 - type: mrr_at_1 value: 32.068000000000005 - type: mrr_at_10 value: 44.029 - type: mrr_at_100 value: 44.87 - type: mrr_at_1000 value: 44.901 - type: mrr_at_3 value: 40.687 - type: 
mrr_at_5 value: 42.625 - type: ndcg_at_1 value: 32.068000000000005 - type: ndcg_at_10 value: 48.449999999999996 - type: ndcg_at_100 value: 53.13 - type: ndcg_at_1000 value: 54.186 - type: ndcg_at_3 value: 40.983999999999995 - type: ndcg_at_5 value: 44.628 - type: precision_at_1 value: 32.068000000000005 - type: precision_at_10 value: 7.9750000000000005 - type: precision_at_100 value: 1.061 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 18.404999999999998 - type: precision_at_5 value: 13.111 - type: recall_at_1 value: 28.616000000000003 - type: recall_at_10 value: 66.956 - type: recall_at_100 value: 87.657 - type: recall_at_1000 value: 95.548 - type: recall_at_3 value: 47.453 - type: recall_at_5 value: 55.87800000000001 - task: type: Classification dataset: name: MTEB PAC type: laugustyniak/abusive-clauses-pl config: default split: test revision: None metrics: - type: accuracy value: 69.04141326382856 - type: ap value: 77.47589122111044 - type: f1 value: 66.6332277374775 - task: type: PairClassification dataset: name: MTEB PPC type: PL-MTEB/ppc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 86.4 - type: cos_sim_ap value: 94.1044939667201 - type: cos_sim_f1 value: 88.78048780487805 - type: cos_sim_precision value: 87.22044728434504 - type: cos_sim_recall value: 90.39735099337747 - type: dot_accuracy value: 86.4 - type: dot_ap value: 94.1044939667201 - type: dot_f1 value: 88.78048780487805 - type: dot_precision value: 87.22044728434504 - type: dot_recall value: 90.39735099337747 - type: euclidean_accuracy value: 86.4 - type: euclidean_ap value: 94.1044939667201 - type: euclidean_f1 value: 88.78048780487805 - type: euclidean_precision value: 87.22044728434504 - type: euclidean_recall value: 90.39735099337747 - type: manhattan_accuracy value: 86.4 - type: manhattan_ap value: 94.11438365697387 - type: manhattan_f1 value: 88.77968877968877 - type: manhattan_precision value: 87.84440842787681 - type: manhattan_recall value: 89.73509933774835 - type: max_accuracy value: 86.4 - type: max_ap value: 94.11438365697387 - type: max_f1 value: 88.78048780487805 - task: type: PairClassification dataset: name: MTEB PSC type: PL-MTEB/psc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 97.86641929499072 - type: cos_sim_ap value: 99.36904211868182 - type: cos_sim_f1 value: 96.56203288490283 - type: cos_sim_precision value: 94.72140762463343 - type: cos_sim_recall value: 98.47560975609755 - type: dot_accuracy value: 97.86641929499072 - type: dot_ap value: 99.36904211868183 - type: dot_f1 value: 96.56203288490283 - type: dot_precision value: 94.72140762463343 - type: dot_recall value: 98.47560975609755 - type: euclidean_accuracy value: 97.86641929499072 - type: euclidean_ap value: 99.36904211868183 - type: euclidean_f1 value: 96.56203288490283 - type: euclidean_precision value: 94.72140762463343 - type: euclidean_recall value: 98.47560975609755 - type: manhattan_accuracy value: 98.14471243042672 - type: manhattan_ap value: 99.43359540492416 - type: manhattan_f1 value: 96.98795180722892 - type: manhattan_precision value: 95.83333333333334 - type: manhattan_recall value: 98.17073170731707 - type: max_accuracy value: 98.14471243042672 - type: max_ap value: 99.43359540492416 - type: max_f1 value: 96.98795180722892 - task: type: Classification dataset: name: MTEB PolEmo2.0-IN type: PL-MTEB/polemo2_in config: default split: test revision: None metrics: - type: accuracy value: 
89.39058171745152 - type: f1 value: 86.8552093529568 - task: type: Classification dataset: name: MTEB PolEmo2.0-OUT type: PL-MTEB/polemo2_out config: default split: test revision: None metrics: - type: accuracy value: 74.97975708502024 - type: f1 value: 58.73081628832407 - task: type: Retrieval dataset: name: MTEB Quora-PL type: clarin-knext/quora-pl config: default split: test revision: 0be27e93455051e531182b85e85e425aba12e9d4 metrics: - type: map_at_1 value: 64.917 - type: map_at_10 value: 78.74600000000001 - type: map_at_100 value: 79.501 - type: map_at_1000 value: 79.524 - type: map_at_3 value: 75.549 - type: map_at_5 value: 77.495 - type: mrr_at_1 value: 74.9 - type: mrr_at_10 value: 82.112 - type: mrr_at_100 value: 82.314 - type: mrr_at_1000 value: 82.317 - type: mrr_at_3 value: 80.745 - type: mrr_at_5 value: 81.607 - type: ndcg_at_1 value: 74.83999999999999 - type: ndcg_at_10 value: 83.214 - type: ndcg_at_100 value: 84.997 - type: ndcg_at_1000 value: 85.207 - type: ndcg_at_3 value: 79.547 - type: ndcg_at_5 value: 81.46600000000001 - type: precision_at_1 value: 74.83999999999999 - type: precision_at_10 value: 12.822 - type: precision_at_100 value: 1.506 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 34.903 - type: precision_at_5 value: 23.16 - type: recall_at_1 value: 64.917 - type: recall_at_10 value: 92.27199999999999 - type: recall_at_100 value: 98.715 - type: recall_at_1000 value: 99.854 - type: recall_at_3 value: 82.04599999999999 - type: recall_at_5 value: 87.2 - task: type: Retrieval dataset: name: MTEB SCIDOCS-PL type: clarin-knext/scidocs-pl config: default split: test revision: 45452b03f05560207ef19149545f168e596c9337 metrics: - type: map_at_1 value: 3.51 - type: map_at_10 value: 9.046999999999999 - type: map_at_100 value: 10.823 - type: map_at_1000 value: 11.144 - type: map_at_3 value: 6.257 - type: map_at_5 value: 7.648000000000001 - type: mrr_at_1 value: 17.299999999999997 - type: mrr_at_10 value: 27.419 - type: mrr_at_100 value: 28.618 - type: mrr_at_1000 value: 28.685 - type: mrr_at_3 value: 23.817 - type: mrr_at_5 value: 25.927 - type: ndcg_at_1 value: 17.299999999999997 - type: ndcg_at_10 value: 16.084 - type: ndcg_at_100 value: 23.729 - type: ndcg_at_1000 value: 29.476999999999997 - type: ndcg_at_3 value: 14.327000000000002 - type: ndcg_at_5 value: 13.017999999999999 - type: precision_at_1 value: 17.299999999999997 - type: precision_at_10 value: 8.63 - type: precision_at_100 value: 1.981 - type: precision_at_1000 value: 0.336 - type: precision_at_3 value: 13.4 - type: precision_at_5 value: 11.700000000000001 - type: recall_at_1 value: 3.51 - type: recall_at_10 value: 17.518 - type: recall_at_100 value: 40.275 - type: recall_at_1000 value: 68.203 - type: recall_at_3 value: 8.155 - type: recall_at_5 value: 11.875 - task: type: PairClassification dataset: name: MTEB SICK-E-PL type: PL-MTEB/sicke-pl-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 86.30248675091724 - type: cos_sim_ap value: 83.6756734006714 - type: cos_sim_f1 value: 74.97367497367497 - type: cos_sim_precision value: 73.91003460207612 - type: cos_sim_recall value: 76.06837606837607 - type: dot_accuracy value: 86.30248675091724 - type: dot_ap value: 83.6756734006714 - type: dot_f1 value: 74.97367497367497 - type: dot_precision value: 73.91003460207612 - type: dot_recall value: 76.06837606837607 - type: euclidean_accuracy value: 86.30248675091724 - type: euclidean_ap value: 83.67566984333091 - type: euclidean_f1 value: 
74.97367497367497 - type: euclidean_precision value: 73.91003460207612 - type: euclidean_recall value: 76.06837606837607 - type: manhattan_accuracy value: 86.28210354667753 - type: manhattan_ap value: 83.64216119130171 - type: manhattan_f1 value: 74.92152075340078 - type: manhattan_precision value: 73.4107997265892 - type: manhattan_recall value: 76.49572649572649 - type: max_accuracy value: 86.30248675091724 - type: max_ap value: 83.6756734006714 - type: max_f1 value: 74.97367497367497 - task: type: STS dataset: name: MTEB SICK-R-PL type: PL-MTEB/sickr-pl-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 82.23295940859121 - type: cos_sim_spearman value: 78.89329160768719 - type: euclidean_pearson value: 79.56019107076818 - type: euclidean_spearman value: 78.89330209904084 - type: manhattan_pearson value: 79.76098513973719 - type: manhattan_spearman value: 79.05490162570123 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 37.732606308062486 - type: cos_sim_spearman value: 41.01645667030284 - type: euclidean_pearson value: 26.61722556367085 - type: euclidean_spearman value: 41.01645667030284 - type: manhattan_pearson value: 26.60917378970807 - type: manhattan_spearman value: 41.51335727617614 - task: type: Retrieval dataset: name: MTEB SciFact-PL type: clarin-knext/scifact-pl config: default split: test revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e metrics: - type: map_at_1 value: 54.31700000000001 - type: map_at_10 value: 65.564 - type: map_at_100 value: 66.062 - type: map_at_1000 value: 66.08699999999999 - type: map_at_3 value: 62.592999999999996 - type: map_at_5 value: 63.888 - type: mrr_at_1 value: 56.99999999999999 - type: mrr_at_10 value: 66.412 - type: mrr_at_100 value: 66.85900000000001 - type: mrr_at_1000 value: 66.88 - type: mrr_at_3 value: 64.22200000000001 - type: mrr_at_5 value: 65.206 - type: ndcg_at_1 value: 56.99999999999999 - type: ndcg_at_10 value: 70.577 - type: ndcg_at_100 value: 72.879 - type: ndcg_at_1000 value: 73.45 - type: ndcg_at_3 value: 65.5 - type: ndcg_at_5 value: 67.278 - type: precision_at_1 value: 56.99999999999999 - type: precision_at_10 value: 9.667 - type: precision_at_100 value: 1.083 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.0 - type: precision_at_5 value: 16.933 - type: recall_at_1 value: 54.31700000000001 - type: recall_at_10 value: 85.056 - type: recall_at_100 value: 95.667 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 71.0 - type: recall_at_5 value: 75.672 - task: type: Retrieval dataset: name: MTEB TRECCOVID-PL type: clarin-knext/trec-covid-pl config: default split: test revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd metrics: - type: map_at_1 value: 0.245 - type: map_at_10 value: 2.051 - type: map_at_100 value: 12.009 - type: map_at_1000 value: 27.448 - type: map_at_3 value: 0.721 - type: map_at_5 value: 1.13 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.0 - type: mrr_at_100 value: 93.0 - type: mrr_at_1000 value: 93.0 - type: mrr_at_3 value: 93.0 - type: mrr_at_5 value: 93.0 - type: ndcg_at_1 value: 85.0 - type: ndcg_at_10 value: 80.303 - type: ndcg_at_100 value: 61.23499999999999 - type: ndcg_at_1000 value: 52.978 - type: ndcg_at_3 value: 84.419 - type: ndcg_at_5 value: 82.976 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 83.39999999999999 - type: precision_at_100 
value: 61.96 - type: precision_at_1000 value: 22.648 - type: precision_at_3 value: 89.333 - type: precision_at_5 value: 87.2 - type: recall_at_1 value: 0.245 - type: recall_at_10 value: 2.193 - type: recall_at_100 value: 14.938 - type: recall_at_1000 value: 48.563 - type: recall_at_3 value: 0.738 - type: recall_at_5 value: 1.173
---

# beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF

This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -c 2048
```
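Once `llama-server` is running (as in the server command above), it can be queried over HTTP. The snippet below is a minimal sketch, assuming the server listens on its default address `127.0.0.1:8080` and using llama.cpp's plain `/completion` endpoint; adjust the URL and generation parameters to your setup.

```python
# Minimal sketch: query a locally running llama-server over HTTP.
# Assumes the server was started with the command above and listens on
# the default 127.0.0.1:8080 -- change LLAMA_SERVER_URL if you changed it.
import requests

LLAMA_SERVER_URL = "http://127.0.0.1:8080/completion"

payload = {
    "prompt": "The meaning to life and the universe is",
    "n_predict": 64,      # maximum number of tokens to generate
    "temperature": 0.8,   # sampling temperature
}

response = requests.post(LLAMA_SERVER_URL, json=payload, timeout=120)
response.raise_for_status()

# The /completion endpoint returns a JSON object whose "content" field
# holds the generated text.
print(response.json()["content"])
```

Since the upstream model is tuned for embedding and sentence-similarity tasks (see the MTEB results above), this request is mainly useful as a smoke test that the GGUF file loads and the server responds.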
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
{"base_model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "license": "apache-2.0", "tags": ["mteb", "sentence-transformers", "transformers", "Qwen2", "sentence-similarity", "llama-cpp", "gguf-my-repo"], "model-index": [{"name": "gte-qwen2-7B-instruct", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 91.31343283582089}, {"type": "ap", "value": 67.64251402604096}, {"type": "f1", "value": 87.53372530755692}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 97.497825}, {"type": "ap", "value": 96.30329547047529}, {"type": "f1", "value": 97.49769793778039}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 62.564}, {"type": "f1", "value": 60.975777935041066}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "mteb/arguana", "config": "default", "split": "test", "revision": "c22ab2a51041ffd869aaddef7af8d8215647e41a"}, "metrics": [{"type": "map_at_1", "value": 36.486000000000004}, {"type": "map_at_10", "value": 54.842}, {"type": "map_at_100", "value": 55.206999999999994}, {"type": "map_at_1000", "value": 55.206999999999994}, {"type": "map_at_3", "value": 49.893}, {"type": "map_at_5", "value": 53.105000000000004}, {"type": "mrr_at_1", "value": 37.34}, {"type": "mrr_at_10", "value": 55.143}, {"type": "mrr_at_100", "value": 55.509}, {"type": "mrr_at_1000", "value": 55.509}, {"type": "mrr_at_3", "value": 50.212999999999994}, {"type": "mrr_at_5", "value": 53.432}, {"type": "ndcg_at_1", "value": 36.486000000000004}, {"type": "ndcg_at_10", "value": 64.273}, {"type": "ndcg_at_100", "value": 65.66199999999999}, {"type": "ndcg_at_1000", "value": 65.66199999999999}, {"type": "ndcg_at_3", "value": 54.352999999999994}, {"type": "ndcg_at_5", "value": 60.131}, {"type": "precision_at_1", "value": 36.486000000000004}, {"type": "precision_at_10", "value": 9.395000000000001}, {"type": "precision_at_100", "value": 0.996}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 22.428}, {"type": "precision_at_5", "value": 16.259}, {"type": "recall_at_1", "value": 36.486000000000004}, {"type": "recall_at_10", "value": 93.95400000000001}, {"type": "recall_at_100", "value": 99.644}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 67.283}, {"type": "recall_at_5", "value": 81.294}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 56.461169803700564}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 51.73600434466286}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB 
AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 67.57827065898053}, {"type": "mrr", "value": 79.08136569493911}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.53324575999243}, {"type": "cos_sim_spearman", "value": 81.37173362822374}, {"type": "euclidean_pearson", "value": 82.19243335103444}, {"type": "euclidean_spearman", "value": 81.33679307304334}, {"type": "manhattan_pearson", "value": 82.38752665975699}, {"type": "manhattan_spearman", "value": 81.31510583189689}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 87.56818181818181}, {"type": "f1", "value": 87.25826722019875}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 50.09239610327673}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 46.64733054606282}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "f46a197baaae43b4f621051089b82a364682dfeb"}, "metrics": [{"type": "map_at_1", "value": 33.997}, {"type": "map_at_10", "value": 48.176}, {"type": "map_at_100", "value": 49.82}, {"type": "map_at_1000", "value": 49.924}, {"type": "map_at_3", "value": 43.626}, {"type": "map_at_5", "value": 46.275}, {"type": "mrr_at_1", "value": 42.059999999999995}, {"type": "mrr_at_10", "value": 53.726}, {"type": "mrr_at_100", "value": 54.398}, {"type": "mrr_at_1000", "value": 54.416}, {"type": "mrr_at_3", "value": 50.714999999999996}, {"type": "mrr_at_5", "value": 52.639}, {"type": "ndcg_at_1", "value": 42.059999999999995}, {"type": "ndcg_at_10", "value": 55.574999999999996}, {"type": "ndcg_at_100", "value": 60.744}, {"type": "ndcg_at_1000", "value": 61.85699999999999}, {"type": "ndcg_at_3", "value": 49.363}, {"type": "ndcg_at_5", "value": 52.44}, {"type": "precision_at_1", "value": 42.059999999999995}, {"type": "precision_at_10", "value": 11.101999999999999}, {"type": "precision_at_100", "value": 1.73}, {"type": "precision_at_1000", "value": 0.218}, {"type": "precision_at_3", "value": 24.464}, {"type": "precision_at_5", "value": 18.026}, {"type": "recall_at_1", "value": 33.997}, {"type": "recall_at_10", "value": 70.35900000000001}, {"type": "recall_at_100", "value": 91.642}, {"type": "recall_at_1000", "value": 97.977}, {"type": "recall_at_3", "value": 52.76}, {"type": "recall_at_5", "value": 61.148}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackEnglishRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "ad9991cb51e31e31e430383c75ffb2885547b5f0"}, "metrics": [{"type": "map_at_1", "value": 35.884}, {"type": 
"map_at_10", "value": 48.14}, {"type": "map_at_100", "value": 49.5}, {"type": "map_at_1000", "value": 49.63}, {"type": "map_at_3", "value": 44.646}, {"type": "map_at_5", "value": 46.617999999999995}, {"type": "mrr_at_1", "value": 44.458999999999996}, {"type": "mrr_at_10", "value": 53.751000000000005}, {"type": "mrr_at_100", "value": 54.37800000000001}, {"type": "mrr_at_1000", "value": 54.415}, {"type": "mrr_at_3", "value": 51.815}, {"type": "mrr_at_5", "value": 52.882}, {"type": "ndcg_at_1", "value": 44.458999999999996}, {"type": "ndcg_at_10", "value": 54.157}, {"type": "ndcg_at_100", "value": 58.362}, {"type": "ndcg_at_1000", "value": 60.178}, {"type": "ndcg_at_3", "value": 49.661}, {"type": "ndcg_at_5", "value": 51.74999999999999}, {"type": "precision_at_1", "value": 44.458999999999996}, {"type": "precision_at_10", "value": 10.248}, {"type": "precision_at_100", "value": 1.5890000000000002}, {"type": "precision_at_1000", "value": 0.207}, {"type": "precision_at_3", "value": 23.928}, {"type": "precision_at_5", "value": 16.878999999999998}, {"type": "recall_at_1", "value": 35.884}, {"type": "recall_at_10", "value": 64.798}, {"type": "recall_at_100", "value": 82.345}, {"type": "recall_at_1000", "value": 93.267}, {"type": "recall_at_3", "value": 51.847}, {"type": "recall_at_5", "value": 57.601}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGamingRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "4885aa143210c98657558c04aaf3dc47cfb54340"}, "metrics": [{"type": "map_at_1", "value": 39.383}, {"type": "map_at_10", "value": 53.714}, {"type": "map_at_100", "value": 54.838}, {"type": "map_at_1000", "value": 54.87800000000001}, {"type": "map_at_3", "value": 50.114999999999995}, {"type": "map_at_5", "value": 52.153000000000006}, {"type": "mrr_at_1", "value": 45.016}, {"type": "mrr_at_10", "value": 56.732000000000006}, {"type": "mrr_at_100", "value": 57.411}, {"type": "mrr_at_1000", "value": 57.431}, {"type": "mrr_at_3", "value": 54.044000000000004}, {"type": "mrr_at_5", "value": 55.639}, {"type": "ndcg_at_1", "value": 45.016}, {"type": "ndcg_at_10", "value": 60.228}, {"type": "ndcg_at_100", "value": 64.277}, {"type": "ndcg_at_1000", "value": 65.07}, {"type": "ndcg_at_3", "value": 54.124}, {"type": "ndcg_at_5", "value": 57.147000000000006}, {"type": "precision_at_1", "value": 45.016}, {"type": "precision_at_10", "value": 9.937}, {"type": "precision_at_100", "value": 1.288}, {"type": "precision_at_1000", "value": 0.13899999999999998}, {"type": "precision_at_3", "value": 24.471999999999998}, {"type": "precision_at_5", "value": 16.991}, {"type": "recall_at_1", "value": 39.383}, {"type": "recall_at_10", "value": 76.175}, {"type": "recall_at_100", "value": 93.02}, {"type": "recall_at_1000", "value": 98.60900000000001}, {"type": "recall_at_3", "value": 60.265}, {"type": "recall_at_5", "value": 67.46600000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGisRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "5003b3064772da1887988e05400cf3806fe491f2"}, "metrics": [{"type": "map_at_1", "value": 27.426000000000002}, {"type": "map_at_10", "value": 37.397000000000006}, {"type": "map_at_100", "value": 38.61}, {"type": "map_at_1000", "value": 38.678000000000004}, {"type": "map_at_3", "value": 34.150999999999996}, {"type": "map_at_5", "value": 36.137}, {"type": "mrr_at_1", "value": 29.944}, {"type": "mrr_at_10", "value": 39.654}, {"type": "mrr_at_100", "value": 40.638000000000005}, 
{"type": "mrr_at_1000", "value": 40.691}, {"type": "mrr_at_3", "value": 36.817}, {"type": "mrr_at_5", "value": 38.524}, {"type": "ndcg_at_1", "value": 29.944}, {"type": "ndcg_at_10", "value": 43.094}, {"type": "ndcg_at_100", "value": 48.789}, {"type": "ndcg_at_1000", "value": 50.339999999999996}, {"type": "ndcg_at_3", "value": 36.984}, {"type": "ndcg_at_5", "value": 40.248}, {"type": "precision_at_1", "value": 29.944}, {"type": "precision_at_10", "value": 6.78}, {"type": "precision_at_100", "value": 1.024}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 15.895000000000001}, {"type": "precision_at_5", "value": 11.39}, {"type": "recall_at_1", "value": 27.426000000000002}, {"type": "recall_at_10", "value": 58.464000000000006}, {"type": "recall_at_100", "value": 84.193}, {"type": "recall_at_1000", "value": 95.52000000000001}, {"type": "recall_at_3", "value": 42.172}, {"type": "recall_at_5", "value": 50.101}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackMathematicaRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "90fceea13679c63fe563ded68f3b6f06e50061de"}, "metrics": [{"type": "map_at_1", "value": 19.721}, {"type": "map_at_10", "value": 31.604}, {"type": "map_at_100", "value": 32.972}, {"type": "map_at_1000", "value": 33.077}, {"type": "map_at_3", "value": 27.218999999999998}, {"type": "map_at_5", "value": 29.53}, {"type": "mrr_at_1", "value": 25.0}, {"type": "mrr_at_10", "value": 35.843}, {"type": "mrr_at_100", "value": 36.785000000000004}, {"type": "mrr_at_1000", "value": 36.842000000000006}, {"type": "mrr_at_3", "value": 32.193}, {"type": "mrr_at_5", "value": 34.264}, {"type": "ndcg_at_1", "value": 25.0}, {"type": "ndcg_at_10", "value": 38.606}, {"type": "ndcg_at_100", "value": 44.272}, {"type": "ndcg_at_1000", "value": 46.527}, {"type": "ndcg_at_3", "value": 30.985000000000003}, {"type": "ndcg_at_5", "value": 34.43}, {"type": "precision_at_1", "value": 25.0}, {"type": "precision_at_10", "value": 7.811}, {"type": "precision_at_100", "value": 1.203}, {"type": "precision_at_1000", "value": 0.15}, {"type": "precision_at_3", "value": 15.423}, {"type": "precision_at_5", "value": 11.791}, {"type": "recall_at_1", "value": 19.721}, {"type": "recall_at_10", "value": 55.625}, {"type": "recall_at_100", "value": 79.34400000000001}, {"type": "recall_at_1000", "value": 95.208}, {"type": "recall_at_3", "value": 35.19}, {"type": "recall_at_5", "value": 43.626}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackPhysicsRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4"}, "metrics": [{"type": "map_at_1", "value": 33.784}, {"type": "map_at_10", "value": 47.522}, {"type": "map_at_100", "value": 48.949999999999996}, {"type": "map_at_1000", "value": 49.038}, {"type": "map_at_3", "value": 43.284}, {"type": "map_at_5", "value": 45.629}, {"type": "mrr_at_1", "value": 41.482}, {"type": "mrr_at_10", "value": 52.830999999999996}, {"type": "mrr_at_100", "value": 53.559999999999995}, {"type": "mrr_at_1000", "value": 53.588}, {"type": "mrr_at_3", "value": 50.016000000000005}, {"type": "mrr_at_5", "value": 51.614000000000004}, {"type": "ndcg_at_1", "value": 41.482}, {"type": "ndcg_at_10", "value": 54.569}, {"type": "ndcg_at_100", "value": 59.675999999999995}, {"type": "ndcg_at_1000", "value": 60.989000000000004}, {"type": "ndcg_at_3", "value": 48.187000000000005}, {"type": "ndcg_at_5", "value": 51.183}, 
{"type": "precision_at_1", "value": 41.482}, {"type": "precision_at_10", "value": 10.221}, {"type": "precision_at_100", "value": 1.486}, {"type": "precision_at_1000", "value": 0.17500000000000002}, {"type": "precision_at_3", "value": 23.548}, {"type": "precision_at_5", "value": 16.805}, {"type": "recall_at_1", "value": 33.784}, {"type": "recall_at_10", "value": 69.798}, {"type": "recall_at_100", "value": 90.098}, {"type": "recall_at_1000", "value": 98.176}, {"type": "recall_at_3", "value": 52.127}, {"type": "recall_at_5", "value": 59.861}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackProgrammersRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "6184bc1440d2dbc7612be22b50686b8826d22b32"}, "metrics": [{"type": "map_at_1", "value": 28.038999999999998}, {"type": "map_at_10", "value": 41.904}, {"type": "map_at_100", "value": 43.36}, {"type": "map_at_1000", "value": 43.453}, {"type": "map_at_3", "value": 37.785999999999994}, {"type": "map_at_5", "value": 40.105000000000004}, {"type": "mrr_at_1", "value": 35.046}, {"type": "mrr_at_10", "value": 46.926}, {"type": "mrr_at_100", "value": 47.815000000000005}, {"type": "mrr_at_1000", "value": 47.849000000000004}, {"type": "mrr_at_3", "value": 44.273}, {"type": "mrr_at_5", "value": 45.774}, {"type": "ndcg_at_1", "value": 35.046}, {"type": "ndcg_at_10", "value": 48.937000000000005}, {"type": "ndcg_at_100", "value": 54.544000000000004}, {"type": "ndcg_at_1000", "value": 56.069}, {"type": "ndcg_at_3", "value": 42.858000000000004}, {"type": "ndcg_at_5", "value": 45.644}, {"type": "precision_at_1", "value": 35.046}, {"type": "precision_at_10", "value": 9.452}, {"type": "precision_at_100", "value": 1.429}, {"type": "precision_at_1000", "value": 0.173}, {"type": "precision_at_3", "value": 21.346999999999998}, {"type": "precision_at_5", "value": 15.342}, {"type": "recall_at_1", "value": 28.038999999999998}, {"type": "recall_at_10", "value": 64.59700000000001}, {"type": "recall_at_100", "value": 87.735}, {"type": "recall_at_1000", "value": 97.41300000000001}, {"type": "recall_at_3", "value": 47.368}, {"type": "recall_at_5", "value": 54.93900000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "4ffe81d471b1924886b33c7567bfb200e9eec5c4"}, "metrics": [{"type": "map_at_1", "value": 28.17291666666667}, {"type": "map_at_10", "value": 40.025749999999995}, {"type": "map_at_100", "value": 41.39208333333333}, {"type": "map_at_1000", "value": 41.499249999999996}, {"type": "map_at_3", "value": 36.347}, {"type": "map_at_5", "value": 38.41391666666667}, {"type": "mrr_at_1", "value": 33.65925}, {"type": "mrr_at_10", "value": 44.085499999999996}, {"type": "mrr_at_100", "value": 44.94116666666667}, {"type": "mrr_at_1000", "value": 44.9855}, {"type": "mrr_at_3", "value": 41.2815}, {"type": "mrr_at_5", "value": 42.91491666666666}, {"type": "ndcg_at_1", "value": 33.65925}, {"type": "ndcg_at_10", "value": 46.430833333333325}, {"type": "ndcg_at_100", "value": 51.761}, {"type": "ndcg_at_1000", "value": 53.50899999999999}, {"type": "ndcg_at_3", "value": 40.45133333333333}, {"type": "ndcg_at_5", "value": 43.31483333333334}, {"type": "precision_at_1", "value": 33.65925}, {"type": "precision_at_10", "value": 8.4995}, {"type": "precision_at_100", "value": 1.3210000000000004}, {"type": "precision_at_1000", "value": 0.16591666666666666}, {"type": "precision_at_3", "value": 19.165083333333335}, {"type": 
"precision_at_5", "value": 13.81816666666667}, {"type": "recall_at_1", "value": 28.17291666666667}, {"type": "recall_at_10", "value": 61.12624999999999}, {"type": "recall_at_100", "value": 83.97266666666667}, {"type": "recall_at_1000", "value": 95.66550000000001}, {"type": "recall_at_3", "value": 44.661249999999995}, {"type": "recall_at_5", "value": 51.983333333333334}, {"type": "map_at_1", "value": 17.936}, {"type": "map_at_10", "value": 27.399}, {"type": "map_at_100", "value": 28.632}, {"type": "map_at_1000", "value": 28.738000000000003}, {"type": "map_at_3", "value": 24.456}, {"type": "map_at_5", "value": 26.06}, {"type": "mrr_at_1", "value": 19.224}, {"type": "mrr_at_10", "value": 28.998}, {"type": "mrr_at_100", "value": 30.11}, {"type": "mrr_at_1000", "value": 30.177}, {"type": "mrr_at_3", "value": 26.247999999999998}, {"type": "mrr_at_5", "value": 27.708}, {"type": "ndcg_at_1", "value": 19.224}, {"type": "ndcg_at_10", "value": 32.911}, {"type": "ndcg_at_100", "value": 38.873999999999995}, {"type": "ndcg_at_1000", "value": 41.277}, {"type": "ndcg_at_3", "value": 27.142}, {"type": "ndcg_at_5", "value": 29.755}, {"type": "precision_at_1", "value": 19.224}, {"type": "precision_at_10", "value": 5.6930000000000005}, {"type": "precision_at_100", "value": 0.9259999999999999}, {"type": "precision_at_1000", "value": 0.126}, {"type": "precision_at_3", "value": 12.138}, {"type": "precision_at_5", "value": 8.909}, {"type": "recall_at_1", "value": 17.936}, {"type": "recall_at_10", "value": 48.096}, {"type": "recall_at_100", "value": 75.389}, {"type": "recall_at_1000", "value": 92.803}, {"type": "recall_at_3", "value": 32.812999999999995}, {"type": "recall_at_5", "value": 38.851}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackStatsRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "65ac3a16b8e91f9cee4c9828cc7c335575432a2a"}, "metrics": [{"type": "map_at_1", "value": 24.681}, {"type": "map_at_10", "value": 34.892}, {"type": "map_at_100", "value": 35.996}, {"type": "map_at_1000", "value": 36.083}, {"type": "map_at_3", "value": 31.491999999999997}, {"type": "map_at_5", "value": 33.632}, {"type": "mrr_at_1", "value": 28.528}, {"type": "mrr_at_10", "value": 37.694}, {"type": "mrr_at_100", "value": 38.613}, {"type": "mrr_at_1000", "value": 38.668}, {"type": "mrr_at_3", "value": 34.714}, {"type": "mrr_at_5", "value": 36.616}, {"type": "ndcg_at_1", "value": 28.528}, {"type": "ndcg_at_10", "value": 40.703}, {"type": "ndcg_at_100", "value": 45.993}, {"type": "ndcg_at_1000", "value": 47.847}, {"type": "ndcg_at_3", "value": 34.622}, {"type": "ndcg_at_5", "value": 38.035999999999994}, {"type": "precision_at_1", "value": 28.528}, {"type": "precision_at_10", "value": 6.902}, {"type": "precision_at_100", "value": 1.0370000000000001}, {"type": "precision_at_1000", "value": 0.126}, {"type": "precision_at_3", "value": 15.798000000000002}, {"type": "precision_at_5", "value": 11.655999999999999}, {"type": "recall_at_1", "value": 24.681}, {"type": "recall_at_10", "value": 55.81}, {"type": "recall_at_100", "value": 79.785}, {"type": "recall_at_1000", "value": 92.959}, {"type": "recall_at_3", "value": 39.074}, {"type": "recall_at_5", "value": 47.568}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackTexRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "46989137a86843e03a6195de44b09deda022eec7"}, "metrics": [{"type": "map_at_1", "value": 18.627}, {"type": "map_at_10", "value": 27.872000000000003}, 
{"type": "map_at_100", "value": 29.237999999999996}, {"type": "map_at_1000", "value": 29.363}, {"type": "map_at_3", "value": 24.751}, {"type": "map_at_5", "value": 26.521}, {"type": "mrr_at_1", "value": 23.021}, {"type": "mrr_at_10", "value": 31.924000000000003}, {"type": "mrr_at_100", "value": 32.922000000000004}, {"type": "mrr_at_1000", "value": 32.988}, {"type": "mrr_at_3", "value": 29.192}, {"type": "mrr_at_5", "value": 30.798}, {"type": "ndcg_at_1", "value": 23.021}, {"type": "ndcg_at_10", "value": 33.535}, {"type": "ndcg_at_100", "value": 39.732}, {"type": "ndcg_at_1000", "value": 42.201}, {"type": "ndcg_at_3", "value": 28.153}, {"type": "ndcg_at_5", "value": 30.746000000000002}, {"type": "precision_at_1", "value": 23.021}, {"type": "precision_at_10", "value": 6.459}, {"type": "precision_at_100", "value": 1.1320000000000001}, {"type": "precision_at_1000", "value": 0.153}, {"type": "precision_at_3", "value": 13.719000000000001}, {"type": "precision_at_5", "value": 10.193000000000001}, {"type": "recall_at_1", "value": 18.627}, {"type": "recall_at_10", "value": 46.463}, {"type": "recall_at_100", "value": 74.226}, {"type": "recall_at_1000", "value": 91.28500000000001}, {"type": "recall_at_3", "value": 31.357000000000003}, {"type": "recall_at_5", "value": 38.067}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackUnixRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "6c6430d3a6d36f8d2a829195bc5dc94d7e063e53"}, "metrics": [{"type": "map_at_1", "value": 31.457}, {"type": "map_at_10", "value": 42.888}, {"type": "map_at_100", "value": 44.24}, {"type": "map_at_1000", "value": 44.327}, {"type": "map_at_3", "value": 39.588}, {"type": "map_at_5", "value": 41.423}, {"type": "mrr_at_1", "value": 37.126999999999995}, {"type": "mrr_at_10", "value": 47.083000000000006}, {"type": "mrr_at_100", "value": 47.997}, {"type": "mrr_at_1000", "value": 48.044}, {"type": "mrr_at_3", "value": 44.574000000000005}, {"type": "mrr_at_5", "value": 46.202}, {"type": "ndcg_at_1", "value": 37.126999999999995}, {"type": "ndcg_at_10", "value": 48.833}, {"type": "ndcg_at_100", "value": 54.327000000000005}, {"type": "ndcg_at_1000", "value": 56.011}, {"type": "ndcg_at_3", "value": 43.541999999999994}, {"type": "ndcg_at_5", "value": 46.127}, {"type": "precision_at_1", "value": 37.126999999999995}, {"type": "precision_at_10", "value": 8.376999999999999}, {"type": "precision_at_100", "value": 1.2309999999999999}, {"type": "precision_at_1000", "value": 0.146}, {"type": "precision_at_3", "value": 20.211000000000002}, {"type": "precision_at_5", "value": 14.16}, {"type": "recall_at_1", "value": 31.457}, {"type": "recall_at_10", "value": 62.369}, {"type": "recall_at_100", "value": 85.444}, {"type": "recall_at_1000", "value": 96.65599999999999}, {"type": "recall_at_3", "value": 47.961}, {"type": "recall_at_5", "value": 54.676}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWebmastersRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "160c094312a0e1facb97e55eeddb698c0abe3571"}, "metrics": [{"type": "map_at_1", "value": 27.139999999999997}, {"type": "map_at_10", "value": 38.801}, {"type": "map_at_100", "value": 40.549}, {"type": "map_at_1000", "value": 40.802}, {"type": "map_at_3", "value": 35.05}, {"type": "map_at_5", "value": 36.884}, {"type": "mrr_at_1", "value": 33.004}, {"type": "mrr_at_10", "value": 43.864}, {"type": "mrr_at_100", "value": 44.667}, {"type": "mrr_at_1000", "value": 44.717}, {"type": 
"mrr_at_3", "value": 40.777}, {"type": "mrr_at_5", "value": 42.319}, {"type": "ndcg_at_1", "value": 33.004}, {"type": "ndcg_at_10", "value": 46.022}, {"type": "ndcg_at_100", "value": 51.542}, {"type": "ndcg_at_1000", "value": 53.742000000000004}, {"type": "ndcg_at_3", "value": 39.795}, {"type": "ndcg_at_5", "value": 42.272}, {"type": "precision_at_1", "value": 33.004}, {"type": "precision_at_10", "value": 9.012}, {"type": "precision_at_100", "value": 1.7770000000000001}, {"type": "precision_at_1000", "value": 0.26}, {"type": "precision_at_3", "value": 19.038}, {"type": "precision_at_5", "value": 13.675999999999998}, {"type": "recall_at_1", "value": 27.139999999999997}, {"type": "recall_at_10", "value": 60.961}, {"type": "recall_at_100", "value": 84.451}, {"type": "recall_at_1000", "value": 98.113}, {"type": "recall_at_3", "value": 43.001}, {"type": "recall_at_5", "value": 49.896}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "mteb/climate-fever", "config": "default", "split": "test", "revision": "47f2ac6acb640fc46020b02a5b59fdda04d39380"}, "metrics": [{"type": "map_at_1", "value": 22.076999999999998}, {"type": "map_at_10", "value": 35.44}, {"type": "map_at_100", "value": 37.651}, {"type": "map_at_1000", "value": 37.824999999999996}, {"type": "map_at_3", "value": 30.764999999999997}, {"type": "map_at_5", "value": 33.26}, {"type": "mrr_at_1", "value": 50.163000000000004}, {"type": "mrr_at_10", "value": 61.207}, {"type": "mrr_at_100", "value": 61.675000000000004}, {"type": "mrr_at_1000", "value": 61.692}, {"type": "mrr_at_3", "value": 58.60999999999999}, {"type": "mrr_at_5", "value": 60.307}, {"type": "ndcg_at_1", "value": 50.163000000000004}, {"type": "ndcg_at_10", "value": 45.882}, {"type": "ndcg_at_100", "value": 53.239999999999995}, {"type": "ndcg_at_1000", "value": 55.852000000000004}, {"type": "ndcg_at_3", "value": 40.514}, {"type": "ndcg_at_5", "value": 42.038}, {"type": "precision_at_1", "value": 50.163000000000004}, {"type": "precision_at_10", "value": 13.466000000000001}, {"type": "precision_at_100", "value": 2.164}, {"type": "precision_at_1000", "value": 0.266}, {"type": "precision_at_3", "value": 29.707}, {"type": "precision_at_5", "value": 21.694}, {"type": "recall_at_1", "value": 22.076999999999998}, {"type": "recall_at_10", "value": 50.193}, {"type": "recall_at_100", "value": 74.993}, {"type": "recall_at_1000", "value": 89.131}, {"type": "recall_at_3", "value": 35.472}, {"type": "recall_at_5", "value": 41.814}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "mteb/dbpedia", "config": "default", "split": "test", "revision": "c0f706b76e590d620bd6618b3ca8efdd34e2d659"}, "metrics": [{"type": "map_at_1", "value": 9.953}, {"type": "map_at_10", "value": 24.515}, {"type": "map_at_100", "value": 36.173}, {"type": "map_at_1000", "value": 38.351}, {"type": "map_at_3", "value": 16.592000000000002}, {"type": "map_at_5", "value": 20.036}, {"type": "mrr_at_1", "value": 74.25}, {"type": "mrr_at_10", "value": 81.813}, {"type": "mrr_at_100", "value": 82.006}, {"type": "mrr_at_1000", "value": 82.011}, {"type": "mrr_at_3", "value": 80.875}, {"type": "mrr_at_5", "value": 81.362}, {"type": "ndcg_at_1", "value": 62.5}, {"type": "ndcg_at_10", "value": 52.42}, {"type": "ndcg_at_100", "value": 56.808}, {"type": "ndcg_at_1000", "value": 63.532999999999994}, {"type": "ndcg_at_3", "value": 56.654}, {"type": "ndcg_at_5", "value": 54.18300000000001}, {"type": "precision_at_1", "value": 74.25}, {"type": "precision_at_10", "value": 
42.699999999999996}, {"type": "precision_at_100", "value": 13.675}, {"type": "precision_at_1000", "value": 2.664}, {"type": "precision_at_3", "value": 60.5}, {"type": "precision_at_5", "value": 52.800000000000004}, {"type": "recall_at_1", "value": 9.953}, {"type": "recall_at_10", "value": 30.253999999999998}, {"type": "recall_at_100", "value": 62.516000000000005}, {"type": "recall_at_1000", "value": 84.163}, {"type": "recall_at_3", "value": 18.13}, {"type": "recall_at_5", "value": 22.771}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 79.455}, {"type": "f1", "value": 74.16798697647569}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "mteb/fever", "config": "default", "split": "test", "revision": "bea83ef9e8fb933d90a2f1d5515737465d613e12"}, "metrics": [{"type": "map_at_1", "value": 87.531}, {"type": "map_at_10", "value": 93.16799999999999}, {"type": "map_at_100", "value": 93.341}, {"type": "map_at_1000", "value": 93.349}, {"type": "map_at_3", "value": 92.444}, {"type": "map_at_5", "value": 92.865}, {"type": "mrr_at_1", "value": 94.014}, {"type": "mrr_at_10", "value": 96.761}, {"type": "mrr_at_100", "value": 96.762}, {"type": "mrr_at_1000", "value": 96.762}, {"type": "mrr_at_3", "value": 96.672}, {"type": "mrr_at_5", "value": 96.736}, {"type": "ndcg_at_1", "value": 94.014}, {"type": "ndcg_at_10", "value": 95.112}, {"type": "ndcg_at_100", "value": 95.578}, {"type": "ndcg_at_1000", "value": 95.68900000000001}, {"type": "ndcg_at_3", "value": 94.392}, {"type": "ndcg_at_5", "value": 94.72500000000001}, {"type": "precision_at_1", "value": 94.014}, {"type": "precision_at_10", "value": 11.065}, {"type": "precision_at_100", "value": 1.157}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 35.259}, {"type": "precision_at_5", "value": 21.599}, {"type": "recall_at_1", "value": 87.531}, {"type": "recall_at_10", "value": 97.356}, {"type": "recall_at_100", "value": 98.965}, {"type": "recall_at_1000", "value": 99.607}, {"type": "recall_at_3", "value": 95.312}, {"type": "recall_at_5", "value": 96.295}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "mteb/fiqa", "config": "default", "split": "test", "revision": "27a168819829fe9bcd655c2df245fb19452e8e06"}, "metrics": [{"type": "map_at_1", "value": 32.055}, {"type": "map_at_10", "value": 53.114}, {"type": "map_at_100", "value": 55.235}, {"type": "map_at_1000", "value": 55.345}, {"type": "map_at_3", "value": 45.854}, {"type": "map_at_5", "value": 50.025}, {"type": "mrr_at_1", "value": 60.34}, {"type": "mrr_at_10", "value": 68.804}, {"type": "mrr_at_100", "value": 69.309}, {"type": "mrr_at_1000", "value": 69.32199999999999}, {"type": "mrr_at_3", "value": 66.40899999999999}, {"type": "mrr_at_5", "value": 67.976}, {"type": "ndcg_at_1", "value": 60.34}, {"type": "ndcg_at_10", "value": 62.031000000000006}, {"type": "ndcg_at_100", "value": 68.00500000000001}, {"type": "ndcg_at_1000", "value": 69.286}, {"type": "ndcg_at_3", "value": 56.355999999999995}, {"type": "ndcg_at_5", "value": 58.687}, {"type": "precision_at_1", "value": 60.34}, {"type": "precision_at_10", "value": 17.176}, {"type": "precision_at_100", "value": 2.36}, {"type": "precision_at_1000", "value": 0.259}, {"type": "precision_at_3", "value": 37.14}, {"type": "precision_at_5", "value": 27.809}, 
{"type": "recall_at_1", "value": 32.055}, {"type": "recall_at_10", "value": 70.91}, {"type": "recall_at_100", "value": 91.83}, {"type": "recall_at_1000", "value": 98.871}, {"type": "recall_at_3", "value": 51.202999999999996}, {"type": "recall_at_5", "value": 60.563}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "mteb/hotpotqa", "config": "default", "split": "test", "revision": "ab518f4d6fcca38d87c25209f94beba119d02014"}, "metrics": [{"type": "map_at_1", "value": 43.68}, {"type": "map_at_10", "value": 64.389}, {"type": "map_at_100", "value": 65.24}, {"type": "map_at_1000", "value": 65.303}, {"type": "map_at_3", "value": 61.309000000000005}, {"type": "map_at_5", "value": 63.275999999999996}, {"type": "mrr_at_1", "value": 87.36}, {"type": "mrr_at_10", "value": 91.12}, {"type": "mrr_at_100", "value": 91.227}, {"type": "mrr_at_1000", "value": 91.229}, {"type": "mrr_at_3", "value": 90.57600000000001}, {"type": "mrr_at_5", "value": 90.912}, {"type": "ndcg_at_1", "value": 87.36}, {"type": "ndcg_at_10", "value": 73.076}, {"type": "ndcg_at_100", "value": 75.895}, {"type": "ndcg_at_1000", "value": 77.049}, {"type": "ndcg_at_3", "value": 68.929}, {"type": "ndcg_at_5", "value": 71.28}, {"type": "precision_at_1", "value": 87.36}, {"type": "precision_at_10", "value": 14.741000000000001}, {"type": "precision_at_100", "value": 1.694}, {"type": "precision_at_1000", "value": 0.185}, {"type": "precision_at_3", "value": 43.043}, {"type": "precision_at_5", "value": 27.681}, {"type": "recall_at_1", "value": 43.68}, {"type": "recall_at_10", "value": 73.707}, {"type": "recall_at_100", "value": 84.7}, {"type": "recall_at_1000", "value": 92.309}, {"type": "recall_at_3", "value": 64.564}, {"type": "recall_at_5", "value": 69.203}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 96.75399999999999}, {"type": "ap", "value": 95.29389839242187}, {"type": "f1", "value": 96.75348377433475}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "mteb/msmarco", "config": "default", "split": "dev", "revision": "c5a29a104738b98a9e76336939199e264163d4a0"}, "metrics": [{"type": "map_at_1", "value": 25.176}, {"type": "map_at_10", "value": 38.598}, {"type": "map_at_100", "value": 39.707}, {"type": "map_at_1000", "value": 39.744}, {"type": "map_at_3", "value": 34.566}, {"type": "map_at_5", "value": 36.863}, {"type": "mrr_at_1", "value": 25.874000000000002}, {"type": "mrr_at_10", "value": 39.214}, {"type": "mrr_at_100", "value": 40.251}, {"type": "mrr_at_1000", "value": 40.281}, {"type": "mrr_at_3", "value": 35.291}, {"type": "mrr_at_5", "value": 37.545}, {"type": "ndcg_at_1", "value": 25.874000000000002}, {"type": "ndcg_at_10", "value": 45.98}, {"type": "ndcg_at_100", "value": 51.197}, {"type": "ndcg_at_1000", "value": 52.073}, {"type": "ndcg_at_3", "value": 37.785999999999994}, {"type": "ndcg_at_5", "value": 41.870000000000005}, {"type": "precision_at_1", "value": 25.874000000000002}, {"type": "precision_at_10", "value": 7.181}, {"type": "precision_at_100", "value": 0.979}, {"type": "precision_at_1000", "value": 0.106}, {"type": "precision_at_3", "value": 16.051000000000002}, {"type": "precision_at_5", "value": 11.713}, {"type": "recall_at_1", "value": 25.176}, {"type": "recall_at_10", "value": 68.67699999999999}, {"type": "recall_at_100", "value": 92.55}, {"type": "recall_at_1000", 
"value": 99.164}, {"type": "recall_at_3", "value": 46.372}, {"type": "recall_at_5", "value": 56.16}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 99.03784769721841}, {"type": "f1", "value": 98.97791641821495}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 91.88326493388054}, {"type": "f1", "value": 73.74809928034335}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 85.41358439811701}, {"type": "f1", "value": 83.503679460639}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 89.77135171486215}, {"type": "f1", "value": 88.89843747468366}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 46.22695362087359}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 44.132372165849425}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 33.35680810650402}, {"type": "mrr", "value": 34.72625715637218}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "mteb/nfcorpus", "config": "default", "split": "test", "revision": "ec0fa4fe99da2ff19ca1214b7966684033a58814"}, "metrics": [{"type": "map_at_1", "value": 7.165000000000001}, {"type": "map_at_10", "value": 15.424}, {"type": "map_at_100", "value": 20.28}, {"type": "map_at_1000", "value": 22.065}, {"type": "map_at_3", "value": 11.236}, {"type": "map_at_5", "value": 13.025999999999998}, {"type": "mrr_at_1", "value": 51.702999999999996}, {"type": "mrr_at_10", "value": 59.965}, {"type": "mrr_at_100", "value": 60.667}, {"type": "mrr_at_1000", "value": 60.702999999999996}, {"type": "mrr_at_3", "value": 58.772000000000006}, {"type": "mrr_at_5", "value": 59.267}, {"type": "ndcg_at_1", "value": 49.536}, {"type": "ndcg_at_10", "value": 40.6}, {"type": "ndcg_at_100", "value": 37.848}, {"type": "ndcg_at_1000", "value": 46.657}, {"type": "ndcg_at_3", "value": 46.117999999999995}, {"type": "ndcg_at_5", "value": 43.619}, {"type": "precision_at_1", "value": 51.393}, {"type": "precision_at_10", "value": 30.31}, {"type": "precision_at_100", "value": 9.972}, {"type": "precision_at_1000", "value": 2.329}, {"type": "precision_at_3", "value": 43.137}, {"type": "precision_at_5", "value": 37.585}, 
{"type": "recall_at_1", "value": 7.165000000000001}, {"type": "recall_at_10", "value": 19.689999999999998}, {"type": "recall_at_100", "value": 39.237}, {"type": "recall_at_1000", "value": 71.417}, {"type": "recall_at_3", "value": 12.247}, {"type": "recall_at_5", "value": 14.902999999999999}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "mteb/nq", "config": "default", "split": "test", "revision": "b774495ed302d8c44a3a7ea25c90dbce03968f31"}, "metrics": [{"type": "map_at_1", "value": 42.653999999999996}, {"type": "map_at_10", "value": 59.611999999999995}, {"type": "map_at_100", "value": 60.32300000000001}, {"type": "map_at_1000", "value": 60.336}, {"type": "map_at_3", "value": 55.584999999999994}, {"type": "map_at_5", "value": 58.19}, {"type": "mrr_at_1", "value": 47.683}, {"type": "mrr_at_10", "value": 62.06700000000001}, {"type": "mrr_at_100", "value": 62.537}, {"type": "mrr_at_1000", "value": 62.544999999999995}, {"type": "mrr_at_3", "value": 59.178}, {"type": "mrr_at_5", "value": 61.034}, {"type": "ndcg_at_1", "value": 47.654}, {"type": "ndcg_at_10", "value": 67.001}, {"type": "ndcg_at_100", "value": 69.73899999999999}, {"type": "ndcg_at_1000", "value": 69.986}, {"type": "ndcg_at_3", "value": 59.95700000000001}, {"type": "ndcg_at_5", "value": 64.025}, {"type": "precision_at_1", "value": 47.654}, {"type": "precision_at_10", "value": 10.367999999999999}, {"type": "precision_at_100", "value": 1.192}, {"type": "precision_at_1000", "value": 0.121}, {"type": "precision_at_3", "value": 26.651000000000003}, {"type": "precision_at_5", "value": 18.459}, {"type": "recall_at_1", "value": 42.653999999999996}, {"type": "recall_at_10", "value": 86.619}, {"type": "recall_at_100", "value": 98.04899999999999}, {"type": "recall_at_1000", "value": 99.812}, {"type": "recall_at_3", "value": 68.987}, {"type": "recall_at_5", "value": 78.158}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "mteb/quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 72.538}, {"type": "map_at_10", "value": 86.702}, {"type": "map_at_100", "value": 87.31}, {"type": "map_at_1000", "value": 87.323}, {"type": "map_at_3", "value": 83.87}, {"type": "map_at_5", "value": 85.682}, {"type": "mrr_at_1", "value": 83.31}, {"type": "mrr_at_10", "value": 89.225}, {"type": "mrr_at_100", "value": 89.30399999999999}, {"type": "mrr_at_1000", "value": 89.30399999999999}, {"type": "mrr_at_3", "value": 88.44300000000001}, {"type": "mrr_at_5", "value": 89.005}, {"type": "ndcg_at_1", "value": 83.32000000000001}, {"type": "ndcg_at_10", "value": 90.095}, {"type": "ndcg_at_100", "value": 91.12}, {"type": "ndcg_at_1000", "value": 91.179}, {"type": "ndcg_at_3", "value": 87.606}, {"type": "ndcg_at_5", "value": 89.031}, {"type": "precision_at_1", "value": 83.32000000000001}, {"type": "precision_at_10", "value": 13.641}, {"type": "precision_at_100", "value": 1.541}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 38.377}, {"type": "precision_at_5", "value": 25.162000000000003}, {"type": "recall_at_1", "value": 72.538}, {"type": "recall_at_10", "value": 96.47200000000001}, {"type": "recall_at_100", "value": 99.785}, {"type": "recall_at_1000", "value": 99.99900000000001}, {"type": "recall_at_3", "value": 89.278}, {"type": "recall_at_5", "value": 93.367}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", 
"revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 73.55219145406065}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 74.13437105242755}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "mteb/scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 6.873}, {"type": "map_at_10", "value": 17.944}, {"type": "map_at_100", "value": 21.171}, {"type": "map_at_1000", "value": 21.528}, {"type": "map_at_3", "value": 12.415}, {"type": "map_at_5", "value": 15.187999999999999}, {"type": "mrr_at_1", "value": 33.800000000000004}, {"type": "mrr_at_10", "value": 46.455}, {"type": "mrr_at_100", "value": 47.378}, {"type": "mrr_at_1000", "value": 47.394999999999996}, {"type": "mrr_at_3", "value": 42.367}, {"type": "mrr_at_5", "value": 44.972}, {"type": "ndcg_at_1", "value": 33.800000000000004}, {"type": "ndcg_at_10", "value": 28.907}, {"type": "ndcg_at_100", "value": 39.695}, {"type": "ndcg_at_1000", "value": 44.582}, {"type": "ndcg_at_3", "value": 26.949}, {"type": "ndcg_at_5", "value": 23.988}, {"type": "precision_at_1", "value": 33.800000000000004}, {"type": "precision_at_10", "value": 15.079999999999998}, {"type": "precision_at_100", "value": 3.056}, {"type": "precision_at_1000", "value": 0.42100000000000004}, {"type": "precision_at_3", "value": 25.167}, {"type": "precision_at_5", "value": 21.26}, {"type": "recall_at_1", "value": 6.873}, {"type": "recall_at_10", "value": 30.568}, {"type": "recall_at_100", "value": 62.062}, {"type": "recall_at_1000", "value": 85.37700000000001}, {"type": "recall_at_3", "value": 15.312999999999999}, {"type": "recall_at_5", "value": 21.575}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.37009118256057}, {"type": "cos_sim_spearman", "value": 79.27986395671529}, {"type": "euclidean_pearson", "value": 79.18037715442115}, {"type": "euclidean_spearman", "value": 79.28004791561621}, {"type": "manhattan_pearson", "value": 79.34062972800541}, {"type": "manhattan_spearman", "value": 79.43106695543402}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.48474767383833}, {"type": "cos_sim_spearman", "value": 79.54505388752513}, {"type": "euclidean_pearson", "value": 83.43282704179565}, {"type": "euclidean_spearman", "value": 79.54579919925405}, {"type": "manhattan_pearson", "value": 83.77564492427952}, {"type": "manhattan_spearman", "value": 79.84558396989286}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.803698035802}, {"type": "cos_sim_spearman", "value": 88.83451367754881}, {"type": "euclidean_pearson", "value": 88.28939285711628}, {"type": "euclidean_spearman", "value": 88.83528996073112}, {"type": "manhattan_pearson", "value": 88.28017412671795}, {"type": "manhattan_spearman", "value": 
88.9228828016344}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.27469288153428}, {"type": "cos_sim_spearman", "value": 83.87477064876288}, {"type": "euclidean_pearson", "value": 84.2601737035379}, {"type": "euclidean_spearman", "value": 83.87431082479074}, {"type": "manhattan_pearson", "value": 84.3621547772745}, {"type": "manhattan_spearman", "value": 84.12094375000423}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.12749863201587}, {"type": "cos_sim_spearman", "value": 88.54287568368565}, {"type": "euclidean_pearson", "value": 87.90429700607999}, {"type": "euclidean_spearman", "value": 88.5437689576261}, {"type": "manhattan_pearson", "value": 88.19276653356833}, {"type": "manhattan_spearman", "value": 88.99995393814679}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.68398747560902}, {"type": "cos_sim_spearman", "value": 86.48815303460574}, {"type": "euclidean_pearson", "value": 85.52356631237954}, {"type": "euclidean_spearman", "value": 86.486391949551}, {"type": "manhattan_pearson", "value": 85.67267981761788}, {"type": "manhattan_spearman", "value": 86.7073696332485}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.9057107443124}, {"type": "cos_sim_spearman", "value": 88.7312168757697}, {"type": "euclidean_pearson", "value": 88.72810439714794}, {"type": "euclidean_spearman", "value": 88.71976185854771}, {"type": "manhattan_pearson", "value": 88.50433745949111}, {"type": "manhattan_spearman", "value": 88.51726175544195}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 67.59391795109886}, {"type": "cos_sim_spearman", "value": 66.87613008631367}, {"type": "euclidean_pearson", "value": 69.23198488262217}, {"type": "euclidean_spearman", "value": 66.85427723013692}, {"type": "manhattan_pearson", "value": 69.50730124841084}, {"type": "manhattan_spearman", "value": 67.10404669820792}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.0820605344619}, {"type": "cos_sim_spearman", "value": 86.8518089863434}, {"type": "euclidean_pearson", "value": 86.31087134689284}, {"type": "euclidean_spearman", "value": 86.8518520517941}, {"type": "manhattan_pearson", "value": 86.47203796160612}, {"type": "manhattan_spearman", "value": 87.1080149734421}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": 
[{"type": "map", "value": 89.09255369305481}, {"type": "mrr", "value": 97.10323445617563}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "mteb/scifact", "config": "default", "split": "test", "revision": "0228b52cf27578f30900b9e5271d331663a030d7"}, "metrics": [{"type": "map_at_1", "value": 61.260999999999996}, {"type": "map_at_10", "value": 74.043}, {"type": "map_at_100", "value": 74.37700000000001}, {"type": "map_at_1000", "value": 74.384}, {"type": "map_at_3", "value": 71.222}, {"type": "map_at_5", "value": 72.875}, {"type": "mrr_at_1", "value": 64.333}, {"type": "mrr_at_10", "value": 74.984}, {"type": "mrr_at_100", "value": 75.247}, {"type": "mrr_at_1000", "value": 75.25500000000001}, {"type": "mrr_at_3", "value": 73.167}, {"type": "mrr_at_5", "value": 74.35000000000001}, {"type": "ndcg_at_1", "value": 64.333}, {"type": "ndcg_at_10", "value": 79.06}, {"type": "ndcg_at_100", "value": 80.416}, {"type": "ndcg_at_1000", "value": 80.55600000000001}, {"type": "ndcg_at_3", "value": 74.753}, {"type": "ndcg_at_5", "value": 76.97500000000001}, {"type": "precision_at_1", "value": 64.333}, {"type": "precision_at_10", "value": 10.567}, {"type": "precision_at_100", "value": 1.1199999999999999}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 29.889}, {"type": "precision_at_5", "value": 19.533}, {"type": "recall_at_1", "value": 61.260999999999996}, {"type": "recall_at_10", "value": 93.167}, {"type": "recall_at_100", "value": 99.0}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 81.667}, {"type": "recall_at_5", "value": 87.394}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.71980198019801}, {"type": "cos_sim_ap", "value": 92.81616007802704}, {"type": "cos_sim_f1", "value": 85.17548454688318}, {"type": "cos_sim_precision", "value": 89.43894389438944}, {"type": "cos_sim_recall", "value": 81.3}, {"type": "dot_accuracy", "value": 99.71980198019801}, {"type": "dot_ap", "value": 92.81398760591358}, {"type": "dot_f1", "value": 85.17548454688318}, {"type": "dot_precision", "value": 89.43894389438944}, {"type": "dot_recall", "value": 81.3}, {"type": "euclidean_accuracy", "value": 99.71980198019801}, {"type": "euclidean_ap", "value": 92.81560637245072}, {"type": "euclidean_f1", "value": 85.17548454688318}, {"type": "euclidean_precision", "value": 89.43894389438944}, {"type": "euclidean_recall", "value": 81.3}, {"type": "manhattan_accuracy", "value": 99.73069306930694}, {"type": "manhattan_ap", "value": 93.14005487480794}, {"type": "manhattan_f1", "value": 85.56263269639068}, {"type": "manhattan_precision", "value": 91.17647058823529}, {"type": "manhattan_recall", "value": 80.60000000000001}, {"type": "max_accuracy", "value": 99.73069306930694}, {"type": "max_ap", "value": 93.14005487480794}, {"type": "max_f1", "value": 85.56263269639068}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 79.86443362395185}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", 
"config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 49.40897096662564}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 55.66040806627947}, {"type": "mrr", "value": 56.58670475766064}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 31.51015090598575}, {"type": "cos_sim_spearman", "value": 31.35016454939226}, {"type": "dot_pearson", "value": 31.5150068731}, {"type": "dot_spearman", "value": 31.34790869023487}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "mteb/trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.254}, {"type": "map_at_10", "value": 2.064}, {"type": "map_at_100", "value": 12.909}, {"type": "map_at_1000", "value": 31.761}, {"type": "map_at_3", "value": 0.738}, {"type": "map_at_5", "value": 1.155}, {"type": "mrr_at_1", "value": 96.0}, {"type": "mrr_at_10", "value": 98.0}, {"type": "mrr_at_100", "value": 98.0}, {"type": "mrr_at_1000", "value": 98.0}, {"type": "mrr_at_3", "value": 98.0}, {"type": "mrr_at_5", "value": 98.0}, {"type": "ndcg_at_1", "value": 93.0}, {"type": "ndcg_at_10", "value": 82.258}, {"type": "ndcg_at_100", "value": 64.34}, {"type": "ndcg_at_1000", "value": 57.912}, {"type": "ndcg_at_3", "value": 90.827}, {"type": "ndcg_at_5", "value": 86.79}, {"type": "precision_at_1", "value": 96.0}, {"type": "precision_at_10", "value": 84.8}, {"type": "precision_at_100", "value": 66.0}, {"type": "precision_at_1000", "value": 25.356}, {"type": "precision_at_3", "value": 94.667}, {"type": "precision_at_5", "value": 90.4}, {"type": "recall_at_1", "value": 0.254}, {"type": "recall_at_10", "value": 2.1950000000000003}, {"type": "recall_at_100", "value": 16.088}, {"type": "recall_at_1000", "value": 54.559000000000005}, {"type": "recall_at_3", "value": 0.75}, {"type": "recall_at_5", "value": 1.191}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "mteb/touche2020", "config": "default", "split": "test", "revision": "a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f"}, "metrics": [{"type": "map_at_1", "value": 2.976}, {"type": "map_at_10", "value": 11.389000000000001}, {"type": "map_at_100", "value": 18.429000000000002}, {"type": "map_at_1000", "value": 20.113}, {"type": "map_at_3", "value": 6.483}, {"type": "map_at_5", "value": 8.770999999999999}, {"type": "mrr_at_1", "value": 40.816}, {"type": "mrr_at_10", "value": 58.118}, {"type": "mrr_at_100", "value": 58.489999999999995}, {"type": "mrr_at_1000", "value": 58.489999999999995}, {"type": "mrr_at_3", "value": 53.061}, {"type": "mrr_at_5", "value": 57.041}, {"type": "ndcg_at_1", "value": 40.816}, {"type": "ndcg_at_10", "value": 30.567}, {"type": "ndcg_at_100", "value": 42.44}, {"type": "ndcg_at_1000", "value": 53.480000000000004}, {"type": "ndcg_at_3", "value": 36.016}, {"type": "ndcg_at_5", "value": 34.257}, {"type": "precision_at_1", "value": 42.857}, {"type": "precision_at_10", "value": 25.714}, {"type": "precision_at_100", "value": 8.429}, {"type": "precision_at_1000", "value": 1.5939999999999999}, {"type": 
"precision_at_3", "value": 36.735}, {"type": "precision_at_5", "value": 33.878}, {"type": "recall_at_1", "value": 2.976}, {"type": "recall_at_10", "value": 17.854999999999997}, {"type": "recall_at_100", "value": 51.833}, {"type": "recall_at_1000", "value": 86.223}, {"type": "recall_at_3", "value": 7.887}, {"type": "recall_at_5", "value": 12.026}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 85.1174}, {"type": "ap", "value": 30.169441069345748}, {"type": "f1", "value": 69.79254701873245}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 72.58347481607245}, {"type": "f1", "value": 72.74877295564937}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 53.90586138221305}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 87.35769207844072}, {"type": "cos_sim_ap", "value": 77.9645072410354}, {"type": "cos_sim_f1", "value": 71.32352941176471}, {"type": "cos_sim_precision", "value": 66.5903890160183}, {"type": "cos_sim_recall", "value": 76.78100263852242}, {"type": "dot_accuracy", "value": 87.37557370209214}, {"type": "dot_ap", "value": 77.96250046429908}, {"type": "dot_f1", "value": 71.28932757557064}, {"type": "dot_precision", "value": 66.95249130938586}, {"type": "dot_recall", "value": 76.22691292875989}, {"type": "euclidean_accuracy", "value": 87.35173153722357}, {"type": "euclidean_ap", "value": 77.96520460741593}, {"type": "euclidean_f1", "value": 71.32470733210104}, {"type": "euclidean_precision", "value": 66.91329479768785}, {"type": "euclidean_recall", "value": 76.35883905013192}, {"type": "manhattan_accuracy", "value": 87.25636287774931}, {"type": "manhattan_ap", "value": 77.77752485611796}, {"type": "manhattan_f1", "value": 71.18148599269183}, {"type": "manhattan_precision", "value": 66.10859728506787}, {"type": "manhattan_recall", "value": 77.0976253298153}, {"type": "max_accuracy", "value": 87.37557370209214}, {"type": "max_ap", "value": 77.96520460741593}, {"type": "max_f1", "value": 71.32470733210104}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.38176737687739}, {"type": "cos_sim_ap", "value": 86.58811861657401}, {"type": "cos_sim_f1", "value": 79.09430644097604}, {"type": "cos_sim_precision", "value": 75.45085977911366}, {"type": "cos_sim_recall", "value": 83.10748383122882}, {"type": "dot_accuracy", "value": 89.38370784336554}, {"type": "dot_ap", "value": 86.58840606004333}, {"type": "dot_f1", "value": 79.10179860068133}, 
{"type": "dot_precision", "value": 75.44546153308643}, {"type": "dot_recall", "value": 83.13058207576223}, {"type": "euclidean_accuracy", "value": 89.38564830985369}, {"type": "euclidean_ap", "value": 86.58820721061164}, {"type": "euclidean_f1", "value": 79.09070942235888}, {"type": "euclidean_precision", "value": 75.38729937194697}, {"type": "euclidean_recall", "value": 83.17677856482906}, {"type": "manhattan_accuracy", "value": 89.40699344122326}, {"type": "manhattan_ap", "value": 86.60631843011362}, {"type": "manhattan_f1", "value": 79.14949970570925}, {"type": "manhattan_precision", "value": 75.78191039729502}, {"type": "manhattan_recall", "value": 82.83030489682784}, {"type": "max_accuracy", "value": 89.40699344122326}, {"type": "max_ap", "value": 86.60631843011362}, {"type": "max_f1", "value": 79.14949970570925}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB AFQMC", "type": "C-MTEB/AFQMC", "config": "default", "split": "validation", "revision": "b44c3b011063adb25877c13823db83bb193913c4"}, "metrics": [{"type": "cos_sim_pearson", "value": 65.58442135663871}, {"type": "cos_sim_spearman", "value": 72.2538631361313}, {"type": "euclidean_pearson", "value": 70.97255486607429}, {"type": "euclidean_spearman", "value": 72.25374250228647}, {"type": "manhattan_pearson", "value": 70.83250199989911}, {"type": "manhattan_spearman", "value": 72.14819496536272}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB ATEC", "type": "C-MTEB/ATEC", "config": "default", "split": "test", "revision": "0f319b1142f28d00e055a6770f3f726ae9b7d865"}, "metrics": [{"type": "cos_sim_pearson", "value": 59.99478404929932}, {"type": "cos_sim_spearman", "value": 62.61836216999812}, {"type": "euclidean_pearson", "value": 66.86429811933593}, {"type": "euclidean_spearman", "value": 62.6183520374191}, {"type": "manhattan_pearson", "value": 66.8063778911633}, {"type": "manhattan_spearman", "value": 62.569607573241115}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (zh)", "type": "mteb/amazon_reviews_multi", "config": "zh", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 53.98400000000001}, {"type": "f1", "value": 51.21447361350723}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BQ", "type": "C-MTEB/BQ", "config": "default", "split": "test", "revision": "e3dda5e115e487b39ec7e618c0c6a29137052a55"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.11941660686553}, {"type": "cos_sim_spearman", "value": 81.25029594540435}, {"type": "euclidean_pearson", "value": 82.06973504238826}, {"type": "euclidean_spearman", "value": 81.2501989488524}, {"type": "manhattan_pearson", "value": 82.10094630392753}, {"type": "manhattan_spearman", "value": 81.27987244392389}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB CLSClusteringP2P", "type": "C-MTEB/CLSClusteringP2P", "config": "default", "split": "test", "revision": "4b6227591c6c1a73bc76b1055f3b7f3588e72476"}, "metrics": [{"type": "v_measure", "value": 47.07270168705156}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB CLSClusteringS2S", "type": "C-MTEB/CLSClusteringS2S", "config": "default", "split": "test", "revision": "e458b3f5414b62b7f9f83499ac1f5497ae2e869f"}, "metrics": [{"type": "v_measure", "value": 45.98511703185043}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB CMedQAv1", "type": "C-MTEB/CMedQAv1-reranking", "config": "default", "split": "test", "revision": "8d7f1e942507dac42dc58017c1a001c3717da7df"}, 
"metrics": [{"type": "map", "value": 88.19895157194931}, {"type": "mrr", "value": 90.21424603174603}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB CMedQAv2", "type": "C-MTEB/CMedQAv2-reranking", "config": "default", "split": "test", "revision": "23d186750531a14a0357ca22cd92d712fd512ea0"}, "metrics": [{"type": "map", "value": 88.03317320980119}, {"type": "mrr", "value": 89.9461507936508}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CmedqaRetrieval", "type": "C-MTEB/CmedqaRetrieval", "config": "default", "split": "dev", "revision": "cd540c506dae1cf9e9a59c3e06f42030d54e7301"}, "metrics": [{"type": "map_at_1", "value": 29.037000000000003}, {"type": "map_at_10", "value": 42.001}, {"type": "map_at_100", "value": 43.773}, {"type": "map_at_1000", "value": 43.878}, {"type": "map_at_3", "value": 37.637}, {"type": "map_at_5", "value": 40.034}, {"type": "mrr_at_1", "value": 43.136}, {"type": "mrr_at_10", "value": 51.158}, {"type": "mrr_at_100", "value": 52.083}, {"type": "mrr_at_1000", "value": 52.12}, {"type": "mrr_at_3", "value": 48.733}, {"type": "mrr_at_5", "value": 50.025}, {"type": "ndcg_at_1", "value": 43.136}, {"type": "ndcg_at_10", "value": 48.685}, {"type": "ndcg_at_100", "value": 55.513}, {"type": "ndcg_at_1000", "value": 57.242000000000004}, {"type": "ndcg_at_3", "value": 43.329}, {"type": "ndcg_at_5", "value": 45.438}, {"type": "precision_at_1", "value": 43.136}, {"type": "precision_at_10", "value": 10.56}, {"type": "precision_at_100", "value": 1.6129999999999998}, {"type": "precision_at_1000", "value": 0.184}, {"type": "precision_at_3", "value": 24.064}, {"type": "precision_at_5", "value": 17.269000000000002}, {"type": "recall_at_1", "value": 29.037000000000003}, {"type": "recall_at_10", "value": 59.245000000000005}, {"type": "recall_at_100", "value": 87.355}, {"type": "recall_at_1000", "value": 98.74000000000001}, {"type": "recall_at_3", "value": 42.99}, {"type": "recall_at_5", "value": 49.681999999999995}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB Cmnli", "type": "C-MTEB/CMNLI", "config": "default", "split": "validation", "revision": "41bc36f332156f7adc9e38f53777c959b2ae9766"}, "metrics": [{"type": "cos_sim_accuracy", "value": 82.68190018039687}, {"type": "cos_sim_ap", "value": 90.18017125327886}, {"type": "cos_sim_f1", "value": 83.64080906868193}, {"type": "cos_sim_precision", "value": 79.7076890489303}, {"type": "cos_sim_recall", "value": 87.98223053542202}, {"type": "dot_accuracy", "value": 82.68190018039687}, {"type": "dot_ap", "value": 90.18782350103646}, {"type": "dot_f1", "value": 83.64242087729039}, {"type": "dot_precision", "value": 79.65313028764805}, {"type": "dot_recall", "value": 88.05237315875614}, {"type": "euclidean_accuracy", "value": 82.68190018039687}, {"type": "euclidean_ap", "value": 90.1801957900632}, {"type": "euclidean_f1", "value": 83.63636363636364}, {"type": "euclidean_precision", "value": 79.52772506852203}, {"type": "euclidean_recall", "value": 88.19265840542437}, {"type": "manhattan_accuracy", "value": 82.14070956103427}, {"type": "manhattan_ap", "value": 89.96178420101427}, {"type": "manhattan_f1", "value": 83.21087838578791}, {"type": "manhattan_precision", "value": 78.35605121850475}, {"type": "manhattan_recall", "value": 88.70703764320785}, {"type": "max_accuracy", "value": 82.68190018039687}, {"type": "max_ap", "value": 90.18782350103646}, {"type": "max_f1", "value": 83.64242087729039}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CovidRetrieval", "type": "C-MTEB/CovidRetrieval", 
"config": "default", "split": "dev", "revision": "1271c7809071a13532e05f25fb53511ffce77117"}, "metrics": [{"type": "map_at_1", "value": 72.234}, {"type": "map_at_10", "value": 80.10000000000001}, {"type": "map_at_100", "value": 80.36}, {"type": "map_at_1000", "value": 80.363}, {"type": "map_at_3", "value": 78.315}, {"type": "map_at_5", "value": 79.607}, {"type": "mrr_at_1", "value": 72.392}, {"type": "mrr_at_10", "value": 80.117}, {"type": "mrr_at_100", "value": 80.36999999999999}, {"type": "mrr_at_1000", "value": 80.373}, {"type": "mrr_at_3", "value": 78.469}, {"type": "mrr_at_5", "value": 79.633}, {"type": "ndcg_at_1", "value": 72.392}, {"type": "ndcg_at_10", "value": 83.651}, {"type": "ndcg_at_100", "value": 84.749}, {"type": "ndcg_at_1000", "value": 84.83000000000001}, {"type": "ndcg_at_3", "value": 80.253}, {"type": "ndcg_at_5", "value": 82.485}, {"type": "precision_at_1", "value": 72.392}, {"type": "precision_at_10", "value": 9.557}, {"type": "precision_at_100", "value": 1.004}, {"type": "precision_at_1000", "value": 0.101}, {"type": "precision_at_3", "value": 28.732000000000003}, {"type": "precision_at_5", "value": 18.377}, {"type": "recall_at_1", "value": 72.234}, {"type": "recall_at_10", "value": 94.573}, {"type": "recall_at_100", "value": 99.368}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 85.669}, {"type": "recall_at_5", "value": 91.01700000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DuRetrieval", "type": "C-MTEB/DuRetrieval", "config": "default", "split": "dev", "revision": "a1a333e290fe30b10f3f56498e3a0d911a693ced"}, "metrics": [{"type": "map_at_1", "value": 26.173999999999996}, {"type": "map_at_10", "value": 80.04}, {"type": "map_at_100", "value": 82.94500000000001}, {"type": "map_at_1000", "value": 82.98100000000001}, {"type": "map_at_3", "value": 55.562999999999995}, {"type": "map_at_5", "value": 69.89800000000001}, {"type": "mrr_at_1", "value": 89.5}, {"type": "mrr_at_10", "value": 92.996}, {"type": "mrr_at_100", "value": 93.06400000000001}, {"type": "mrr_at_1000", "value": 93.065}, {"type": "mrr_at_3", "value": 92.658}, {"type": "mrr_at_5", "value": 92.84599999999999}, {"type": "ndcg_at_1", "value": 89.5}, {"type": "ndcg_at_10", "value": 87.443}, {"type": "ndcg_at_100", "value": 90.253}, {"type": "ndcg_at_1000", "value": 90.549}, {"type": "ndcg_at_3", "value": 85.874}, {"type": "ndcg_at_5", "value": 84.842}, {"type": "precision_at_1", "value": 89.5}, {"type": "precision_at_10", "value": 41.805}, {"type": "precision_at_100", "value": 4.827}, {"type": "precision_at_1000", "value": 0.49}, {"type": "precision_at_3", "value": 76.85}, {"type": "precision_at_5", "value": 64.8}, {"type": "recall_at_1", "value": 26.173999999999996}, {"type": "recall_at_10", "value": 89.101}, {"type": "recall_at_100", "value": 98.08099999999999}, {"type": "recall_at_1000", "value": 99.529}, {"type": "recall_at_3", "value": 57.902}, {"type": "recall_at_5", "value": 74.602}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB EcomRetrieval", "type": "C-MTEB/EcomRetrieval", "config": "default", "split": "dev", "revision": "687de13dc7294d6fd9be10c6945f9e8fec8166b9"}, "metrics": [{"type": "map_at_1", "value": 56.10000000000001}, {"type": "map_at_10", "value": 66.15299999999999}, {"type": "map_at_100", "value": 66.625}, {"type": "map_at_1000", "value": 66.636}, {"type": "map_at_3", "value": 63.632999999999996}, {"type": "map_at_5", "value": 65.293}, {"type": "mrr_at_1", "value": 56.10000000000001}, {"type": "mrr_at_10", "value": 
66.15299999999999}, {"type": "mrr_at_100", "value": 66.625}, {"type": "mrr_at_1000", "value": 66.636}, {"type": "mrr_at_3", "value": 63.632999999999996}, {"type": "mrr_at_5", "value": 65.293}, {"type": "ndcg_at_1", "value": 56.10000000000001}, {"type": "ndcg_at_10", "value": 71.146}, {"type": "ndcg_at_100", "value": 73.27799999999999}, {"type": "ndcg_at_1000", "value": 73.529}, {"type": "ndcg_at_3", "value": 66.09}, {"type": "ndcg_at_5", "value": 69.08999999999999}, {"type": "precision_at_1", "value": 56.10000000000001}, {"type": "precision_at_10", "value": 8.68}, {"type": "precision_at_100", "value": 0.964}, {"type": "precision_at_1000", "value": 0.098}, {"type": "precision_at_3", "value": 24.4}, {"type": "precision_at_5", "value": 16.1}, {"type": "recall_at_1", "value": 56.10000000000001}, {"type": "recall_at_10", "value": 86.8}, {"type": "recall_at_100", "value": 96.39999999999999}, {"type": "recall_at_1000", "value": 98.3}, {"type": "recall_at_3", "value": 73.2}, {"type": "recall_at_5", "value": 80.5}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB IFlyTek", "type": "C-MTEB/IFlyTek-classification", "config": "default", "split": "validation", "revision": "421605374b29664c5fc098418fe20ada9bd55f8a"}, "metrics": [{"type": "accuracy", "value": 54.52096960369373}, {"type": "f1", "value": 40.930845295808695}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB JDReview", "type": "C-MTEB/JDReview-classification", "config": "default", "split": "test", "revision": "b7c64bd89eb87f8ded463478346f76731f07bf8b"}, "metrics": [{"type": "accuracy", "value": 86.51031894934334}, {"type": "ap", "value": 55.9516014323483}, {"type": "f1", "value": 81.54813679326381}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB LCQMC", "type": "C-MTEB/LCQMC", "config": "default", "split": "test", "revision": "17f9b096f80380fce5ed12a9be8be7784b337daf"}, "metrics": [{"type": "cos_sim_pearson", "value": 69.67437838574276}, {"type": "cos_sim_spearman", "value": 73.81314174653045}, {"type": "euclidean_pearson", "value": 72.63430276680275}, {"type": "euclidean_spearman", "value": 73.81358736777001}, {"type": "manhattan_pearson", "value": 72.58743833842829}, {"type": "manhattan_spearman", "value": 73.7590419009179}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MMarcoReranking", "type": "C-MTEB/Mmarco-reranking", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map", "value": 31.648613483640254}, {"type": "mrr", "value": 30.37420634920635}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MMarcoRetrieval", "type": "C-MTEB/MMarcoRetrieval", "config": "default", "split": "dev", "revision": "539bbde593d947e2a124ba72651aafc09eb33fc2"}, "metrics": [{"type": "map_at_1", "value": 73.28099999999999}, {"type": "map_at_10", "value": 81.977}, {"type": "map_at_100", "value": 82.222}, {"type": "map_at_1000", "value": 82.22699999999999}, {"type": "map_at_3", "value": 80.441}, {"type": "map_at_5", "value": 81.46600000000001}, {"type": "mrr_at_1", "value": 75.673}, {"type": "mrr_at_10", "value": 82.41000000000001}, {"type": "mrr_at_100", "value": 82.616}, {"type": "mrr_at_1000", "value": 82.621}, {"type": "mrr_at_3", "value": 81.094}, {"type": "mrr_at_5", "value": 81.962}, {"type": "ndcg_at_1", "value": 75.673}, {"type": "ndcg_at_10", "value": 85.15599999999999}, {"type": "ndcg_at_100", "value": 86.151}, {"type": "ndcg_at_1000", "value": 86.26899999999999}, {"type": "ndcg_at_3", "value": 82.304}, {"type": "ndcg_at_5", "value": 84.009}, {"type": 
"precision_at_1", "value": 75.673}, {"type": "precision_at_10", "value": 10.042}, {"type": "precision_at_100", "value": 1.052}, {"type": "precision_at_1000", "value": 0.106}, {"type": "precision_at_3", "value": 30.673000000000002}, {"type": "precision_at_5", "value": 19.326999999999998}, {"type": "recall_at_1", "value": 73.28099999999999}, {"type": "recall_at_10", "value": 94.446}, {"type": "recall_at_100", "value": 98.737}, {"type": "recall_at_1000", "value": 99.649}, {"type": "recall_at_3", "value": 86.984}, {"type": "recall_at_5", "value": 91.024}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-CN)", "type": "mteb/amazon_massive_intent", "config": "zh-CN", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 81.08607935440484}, {"type": "f1", "value": 78.24879986066307}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-CN)", "type": "mteb/amazon_massive_scenario", "config": "zh-CN", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 86.05917955615332}, {"type": "f1", "value": 85.05279279434997}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MedicalRetrieval", "type": "C-MTEB/MedicalRetrieval", "config": "default", "split": "dev", "revision": "2039188fb5800a9803ba5048df7b76e6fb151fc6"}, "metrics": [{"type": "map_at_1", "value": 56.2}, {"type": "map_at_10", "value": 62.57899999999999}, {"type": "map_at_100", "value": 63.154999999999994}, {"type": "map_at_1000", "value": 63.193}, {"type": "map_at_3", "value": 61.217}, {"type": "map_at_5", "value": 62.012}, {"type": "mrr_at_1", "value": 56.3}, {"type": "mrr_at_10", "value": 62.629000000000005}, {"type": "mrr_at_100", "value": 63.205999999999996}, {"type": "mrr_at_1000", "value": 63.244}, {"type": "mrr_at_3", "value": 61.267}, {"type": "mrr_at_5", "value": 62.062}, {"type": "ndcg_at_1", "value": 56.2}, {"type": "ndcg_at_10", "value": 65.592}, {"type": "ndcg_at_100", "value": 68.657}, {"type": "ndcg_at_1000", "value": 69.671}, {"type": "ndcg_at_3", "value": 62.808}, {"type": "ndcg_at_5", "value": 64.24499999999999}, {"type": "precision_at_1", "value": 56.2}, {"type": "precision_at_10", "value": 7.5}, {"type": "precision_at_100", "value": 0.899}, {"type": "precision_at_1000", "value": 0.098}, {"type": "precision_at_3", "value": 22.467000000000002}, {"type": "precision_at_5", "value": 14.180000000000001}, {"type": "recall_at_1", "value": 56.2}, {"type": "recall_at_10", "value": 75.0}, {"type": "recall_at_100", "value": 89.9}, {"type": "recall_at_1000", "value": 97.89999999999999}, {"type": "recall_at_3", "value": 67.4}, {"type": "recall_at_5", "value": 70.89999999999999}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MultilingualSentiment", "type": "C-MTEB/MultilingualSentiment-classification", "config": "default", "split": "validation", "revision": "46958b007a63fdbf239b7672c25d0bea67b5ea1a"}, "metrics": [{"type": "accuracy", "value": 76.87666666666667}, {"type": "f1", "value": 76.7317686219665}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB Ocnli", "type": "C-MTEB/OCNLI", "config": "default", "split": "validation", "revision": "66e76a618a34d6d565d5538088562851e6daa7ec"}, "metrics": [{"type": "cos_sim_accuracy", "value": 79.64266377910124}, {"type": "cos_sim_ap", "value": 84.78274442344829}, {"type": "cos_sim_f1", "value": 81.16947472745292}, {"type": 
"cos_sim_precision", "value": 76.47058823529412}, {"type": "cos_sim_recall", "value": 86.48363252375924}, {"type": "dot_accuracy", "value": 79.64266377910124}, {"type": "dot_ap", "value": 84.7851404063692}, {"type": "dot_f1", "value": 81.16947472745292}, {"type": "dot_precision", "value": 76.47058823529412}, {"type": "dot_recall", "value": 86.48363252375924}, {"type": "euclidean_accuracy", "value": 79.64266377910124}, {"type": "euclidean_ap", "value": 84.78068373762378}, {"type": "euclidean_f1", "value": 81.14794656110837}, {"type": "euclidean_precision", "value": 76.35009310986965}, {"type": "euclidean_recall", "value": 86.58922914466737}, {"type": "manhattan_accuracy", "value": 79.48023822414727}, {"type": "manhattan_ap", "value": 84.72928897427576}, {"type": "manhattan_f1", "value": 81.32084770823064}, {"type": "manhattan_precision", "value": 76.24768946395564}, {"type": "manhattan_recall", "value": 87.11721224920802}, {"type": "max_accuracy", "value": 79.64266377910124}, {"type": "max_ap", "value": 84.7851404063692}, {"type": "max_f1", "value": 81.32084770823064}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB OnlineShopping", "type": "C-MTEB/OnlineShopping-classification", "config": "default", "split": "test", "revision": "e610f2ebd179a8fda30ae534c3878750a96db120"}, "metrics": [{"type": "accuracy", "value": 94.3}, {"type": "ap", "value": 92.8664032274438}, {"type": "f1", "value": 94.29311102997727}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB PAWSX", "type": "C-MTEB/PAWSX", "config": "default", "split": "test", "revision": "9c6a90e430ac22b5779fb019a23e820b11a8b5e1"}, "metrics": [{"type": "cos_sim_pearson", "value": 48.51392279882909}, {"type": "cos_sim_spearman", "value": 54.06338895994974}, {"type": "euclidean_pearson", "value": 52.58480559573412}, {"type": "euclidean_spearman", "value": 54.06417276612201}, {"type": "manhattan_pearson", "value": 52.69525121721343}, {"type": "manhattan_spearman", "value": 54.048147455389675}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB QBQTC", "type": "C-MTEB/QBQTC", "config": "default", "split": "test", "revision": "790b0510dc52b1553e8c49f3d2afb48c0e5c48b7"}, "metrics": [{"type": "cos_sim_pearson", "value": 29.728387290757325}, {"type": "cos_sim_spearman", "value": 31.366121633635284}, {"type": "euclidean_pearson", "value": 29.14588368552961}, {"type": "euclidean_spearman", "value": 31.36764411112844}, {"type": "manhattan_pearson", "value": 29.63517350523121}, {"type": "manhattan_spearman", "value": 31.94157020583762}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh)", "type": "mteb/sts22-crosslingual-sts", "config": "zh", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 63.64868296271406}, {"type": "cos_sim_spearman", "value": 66.12800618164744}, {"type": "euclidean_pearson", "value": 63.21405767340238}, {"type": "euclidean_spearman", "value": 66.12786567790748}, {"type": "manhattan_pearson", "value": 64.04300276525848}, {"type": "manhattan_spearman", "value": 66.5066857145652}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSB", "type": "C-MTEB/STSB", "config": "default", "split": "test", "revision": "0cde68302b3541bb8b3c340dc0644b0b745b3dc0"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.2302623912794}, {"type": "cos_sim_spearman", "value": 81.16833673266562}, {"type": "euclidean_pearson", "value": 79.47647843876024}, {"type": "euclidean_spearman", "value": 81.16944349524972}, {"type": "manhattan_pearson", 
"value": 79.84947238492208}, {"type": "manhattan_spearman", "value": 81.64626599410026}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB T2Reranking", "type": "C-MTEB/T2Reranking", "config": "default", "split": "dev", "revision": "76631901a18387f85eaa53e5450019b87ad58ef9"}, "metrics": [{"type": "map", "value": 67.80129586475687}, {"type": "mrr", "value": 77.77402311635554}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB T2Retrieval", "type": "C-MTEB/T2Retrieval", "config": "default", "split": "dev", "revision": "8731a845f1bf500a4f111cf1070785c793d10e64"}, "metrics": [{"type": "map_at_1", "value": 28.666999999999998}, {"type": "map_at_10", "value": 81.063}, {"type": "map_at_100", "value": 84.504}, {"type": "map_at_1000", "value": 84.552}, {"type": "map_at_3", "value": 56.897}, {"type": "map_at_5", "value": 70.073}, {"type": "mrr_at_1", "value": 92.087}, {"type": "mrr_at_10", "value": 94.132}, {"type": "mrr_at_100", "value": 94.19800000000001}, {"type": "mrr_at_1000", "value": 94.19999999999999}, {"type": "mrr_at_3", "value": 93.78999999999999}, {"type": "mrr_at_5", "value": 94.002}, {"type": "ndcg_at_1", "value": 92.087}, {"type": "ndcg_at_10", "value": 87.734}, {"type": "ndcg_at_100", "value": 90.736}, {"type": "ndcg_at_1000", "value": 91.184}, {"type": "ndcg_at_3", "value": 88.78}, {"type": "ndcg_at_5", "value": 87.676}, {"type": "precision_at_1", "value": 92.087}, {"type": "precision_at_10", "value": 43.46}, {"type": "precision_at_100", "value": 5.07}, {"type": "precision_at_1000", "value": 0.518}, {"type": "precision_at_3", "value": 77.49000000000001}, {"type": "precision_at_5", "value": 65.194}, {"type": "recall_at_1", "value": 28.666999999999998}, {"type": "recall_at_10", "value": 86.632}, {"type": "recall_at_100", "value": 96.646}, {"type": "recall_at_1000", "value": 98.917}, {"type": "recall_at_3", "value": 58.333999999999996}, {"type": "recall_at_5", "value": 72.974}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TNews", "type": "C-MTEB/TNews-classification", "config": "default", "split": "validation", "revision": "317f262bf1e6126357bbe89e875451e4b0938fe4"}, "metrics": [{"type": "accuracy", "value": 52.971999999999994}, {"type": "f1", "value": 50.2898280984929}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ThuNewsClusteringP2P", "type": "C-MTEB/ThuNewsClusteringP2P", "config": "default", "split": "test", "revision": "5798586b105c0434e4f0fe5e767abe619442cf93"}, "metrics": [{"type": "v_measure", "value": 86.0797948663824}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ThuNewsClusteringS2S", "type": "C-MTEB/ThuNewsClusteringS2S", "config": "default", "split": "test", "revision": "8a8b2caeda43f39e13c4bc5bea0f8a667896e10d"}, "metrics": [{"type": "v_measure", "value": 85.10759092255017}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB VideoRetrieval", "type": "C-MTEB/VideoRetrieval", "config": "default", "split": "dev", "revision": "58c2597a5943a2ba48f4668c3b90d796283c5639"}, "metrics": [{"type": "map_at_1", "value": 65.60000000000001}, {"type": "map_at_10", "value": 74.773}, {"type": "map_at_100", "value": 75.128}, {"type": "map_at_1000", "value": 75.136}, {"type": "map_at_3", "value": 73.05}, {"type": "map_at_5", "value": 74.13499999999999}, {"type": "mrr_at_1", "value": 65.60000000000001}, {"type": "mrr_at_10", "value": 74.773}, {"type": "mrr_at_100", "value": 75.128}, {"type": "mrr_at_1000", "value": 75.136}, {"type": "mrr_at_3", "value": 73.05}, {"type": "mrr_at_5", "value": 74.13499999999999}, 
{"type": "ndcg_at_1", "value": 65.60000000000001}, {"type": "ndcg_at_10", "value": 78.84299999999999}, {"type": "ndcg_at_100", "value": 80.40899999999999}, {"type": "ndcg_at_1000", "value": 80.57}, {"type": "ndcg_at_3", "value": 75.40599999999999}, {"type": "ndcg_at_5", "value": 77.351}, {"type": "precision_at_1", "value": 65.60000000000001}, {"type": "precision_at_10", "value": 9.139999999999999}, {"type": "precision_at_100", "value": 0.984}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 27.400000000000002}, {"type": "precision_at_5", "value": 17.380000000000003}, {"type": "recall_at_1", "value": 65.60000000000001}, {"type": "recall_at_10", "value": 91.4}, {"type": "recall_at_100", "value": 98.4}, {"type": "recall_at_1000", "value": 99.6}, {"type": "recall_at_3", "value": 82.19999999999999}, {"type": "recall_at_5", "value": 86.9}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Waimai", "type": "C-MTEB/waimai-classification", "config": "default", "split": "test", "revision": "339287def212450dcaa9df8c22bf93e9980c7023"}, "metrics": [{"type": "accuracy", "value": 89.47}, {"type": "ap", "value": 75.59561751845389}, {"type": "f1", "value": 87.95207751382563}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB AlloProfClusteringP2P", "type": "lyon-nlp/alloprof", "config": "default", "split": "test", "revision": "392ba3f5bcc8c51f578786c1fc3dae648662cb9b"}, "metrics": [{"type": "v_measure", "value": 76.05592323841036}, {"type": "v_measure", "value": 64.51718058866508}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AlloprofReranking", "type": "lyon-nlp/mteb-fr-reranking-alloprof-s2p", "config": "default", "split": "test", "revision": "666fdacebe0291776e86f29345663dfaf80a0db9"}, "metrics": [{"type": "map", "value": 73.08278490943373}, {"type": "mrr", "value": 74.66561454570449}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB AlloprofRetrieval", "type": "lyon-nlp/alloprof", "config": "default", "split": "test", "revision": "392ba3f5bcc8c51f578786c1fc3dae648662cb9b"}, "metrics": [{"type": "map_at_1", "value": 38.912}, {"type": "map_at_10", "value": 52.437999999999995}, {"type": "map_at_100", "value": 53.38}, {"type": "map_at_1000", "value": 53.427}, {"type": "map_at_3", "value": 48.879}, {"type": "map_at_5", "value": 50.934000000000005}, {"type": "mrr_at_1", "value": 44.085}, {"type": "mrr_at_10", "value": 55.337}, {"type": "mrr_at_100", "value": 56.016999999999996}, {"type": "mrr_at_1000", "value": 56.043}, {"type": "mrr_at_3", "value": 52.55499999999999}, {"type": "mrr_at_5", "value": 54.20399999999999}, {"type": "ndcg_at_1", "value": 44.085}, {"type": "ndcg_at_10", "value": 58.876}, {"type": "ndcg_at_100", "value": 62.714000000000006}, {"type": "ndcg_at_1000", "value": 63.721000000000004}, {"type": "ndcg_at_3", "value": 52.444}, {"type": "ndcg_at_5", "value": 55.692}, {"type": "precision_at_1", "value": 44.085}, {"type": "precision_at_10", "value": 9.21}, {"type": "precision_at_100", "value": 1.164}, {"type": "precision_at_1000", "value": 0.128}, {"type": "precision_at_3", "value": 23.043}, {"type": "precision_at_5", "value": 15.898000000000001}, {"type": "recall_at_1", "value": 38.912}, {"type": "recall_at_10", "value": 75.577}, {"type": "recall_at_100", "value": 92.038}, {"type": "recall_at_1000", "value": 99.325}, {"type": "recall_at_3", "value": 58.592}, {"type": "recall_at_5", "value": 66.235}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (fr)", "type": 
"mteb/amazon_reviews_multi", "config": "fr", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 55.532000000000004}, {"type": "f1", "value": 52.5783943471605}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB BSARDRetrieval", "type": "maastrichtlawtech/bsard", "config": "default", "split": "test", "revision": "5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59"}, "metrics": [{"type": "map_at_1", "value": 8.108}, {"type": "map_at_10", "value": 14.710999999999999}, {"type": "map_at_100", "value": 15.891}, {"type": "map_at_1000", "value": 15.983}, {"type": "map_at_3", "value": 12.237}, {"type": "map_at_5", "value": 13.679}, {"type": "mrr_at_1", "value": 8.108}, {"type": "mrr_at_10", "value": 14.710999999999999}, {"type": "mrr_at_100", "value": 15.891}, {"type": "mrr_at_1000", "value": 15.983}, {"type": "mrr_at_3", "value": 12.237}, {"type": "mrr_at_5", "value": 13.679}, {"type": "ndcg_at_1", "value": 8.108}, {"type": "ndcg_at_10", "value": 18.796}, {"type": "ndcg_at_100", "value": 25.098}, {"type": "ndcg_at_1000", "value": 27.951999999999998}, {"type": "ndcg_at_3", "value": 13.712}, {"type": "ndcg_at_5", "value": 16.309}, {"type": "precision_at_1", "value": 8.108}, {"type": "precision_at_10", "value": 3.198}, {"type": "precision_at_100", "value": 0.626}, {"type": "precision_at_1000", "value": 0.086}, {"type": "precision_at_3", "value": 6.006}, {"type": "precision_at_5", "value": 4.865}, {"type": "recall_at_1", "value": 8.108}, {"type": "recall_at_10", "value": 31.982}, {"type": "recall_at_100", "value": 62.613}, {"type": "recall_at_1000", "value": 86.036}, {"type": "recall_at_3", "value": 18.018}, {"type": "recall_at_5", "value": 24.324}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB HALClusteringS2S", "type": "lyon-nlp/clustering-hal-s2s", "config": "default", "split": "test", "revision": "e06ebbbb123f8144bef1a5d18796f3dec9ae2915"}, "metrics": [{"type": "v_measure", "value": 30.833269778867116}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MLSUMClusteringP2P", "type": "mlsum", "config": "default", "split": "test", "revision": "b5d54f8f3b61ae17845046286940f03c6bc79bc7"}, "metrics": [{"type": "v_measure", "value": 50.0281928004713}, {"type": "v_measure", "value": 43.699961510636534}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (fr)", "type": "mteb/mtop_domain", "config": "fr", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 96.68963357344191}, {"type": "f1", "value": 96.45175170820961}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (fr)", "type": "mteb/mtop_intent", "config": "fr", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 87.46946445349202}, {"type": "f1", "value": 65.79860440988624}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MasakhaNEWSClassification (fra)", "type": "masakhane/masakhanews", "config": "fra", "split": "test", "revision": "8ccc72e69e65f40c70e117d8b3c08306bb788b60"}, "metrics": [{"type": "accuracy", "value": 82.60663507109005}, {"type": "f1", "value": 77.20462646604777}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MasakhaNEWSClusteringP2P (fra)", "type": "masakhane/masakhanews", "config": "fra", "split": "test", "revision": "8ccc72e69e65f40c70e117d8b3c08306bb788b60"}, "metrics": [{"type": "v_measure", "value": 
60.19311264967803}, {"type": "v_measure", "value": 63.6235764409785}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fr)", "type": "mteb/amazon_massive_intent", "config": "fr", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 81.65097511768661}, {"type": "f1", "value": 78.77796091490924}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fr)", "type": "mteb/amazon_massive_scenario", "config": "fr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 86.64425016812373}, {"type": "f1", "value": 85.4912728670017}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MintakaRetrieval (fr)", "type": "jinaai/mintakaqa", "config": "fr", "split": "test", "revision": "efa78cc2f74bbcd21eff2261f9e13aebe40b814e"}, "metrics": [{"type": "map_at_1", "value": 35.913000000000004}, {"type": "map_at_10", "value": 48.147}, {"type": "map_at_100", "value": 48.91}, {"type": "map_at_1000", "value": 48.949}, {"type": "map_at_3", "value": 45.269999999999996}, {"type": "map_at_5", "value": 47.115}, {"type": "mrr_at_1", "value": 35.913000000000004}, {"type": "mrr_at_10", "value": 48.147}, {"type": "mrr_at_100", "value": 48.91}, {"type": "mrr_at_1000", "value": 48.949}, {"type": "mrr_at_3", "value": 45.269999999999996}, {"type": "mrr_at_5", "value": 47.115}, {"type": "ndcg_at_1", "value": 35.913000000000004}, {"type": "ndcg_at_10", "value": 54.03}, {"type": "ndcg_at_100", "value": 57.839}, {"type": "ndcg_at_1000", "value": 58.925000000000004}, {"type": "ndcg_at_3", "value": 48.217999999999996}, {"type": "ndcg_at_5", "value": 51.56699999999999}, {"type": "precision_at_1", "value": 35.913000000000004}, {"type": "precision_at_10", "value": 7.244000000000001}, {"type": "precision_at_100", "value": 0.9039999999999999}, {"type": "precision_at_1000", "value": 0.099}, {"type": "precision_at_3", "value": 18.905}, {"type": "precision_at_5", "value": 12.981000000000002}, {"type": "recall_at_1", "value": 35.913000000000004}, {"type": "recall_at_10", "value": 72.441}, {"type": "recall_at_100", "value": 90.41799999999999}, {"type": "recall_at_1000", "value": 99.099}, {"type": "recall_at_3", "value": 56.716}, {"type": "recall_at_5", "value": 64.90599999999999}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB OpusparcusPC (fr)", "type": "GEM/opusparcus", "config": "fr", "split": "test", "revision": "9e9b1f8ef51616073f47f306f7f47dd91663f86a"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.90069513406156}, {"type": "cos_sim_ap", "value": 100.0}, {"type": "cos_sim_f1", "value": 99.95032290114257}, {"type": "cos_sim_precision", "value": 100.0}, {"type": "cos_sim_recall", "value": 99.90069513406156}, {"type": "dot_accuracy", "value": 99.90069513406156}, {"type": "dot_ap", "value": 100.0}, {"type": "dot_f1", "value": 99.95032290114257}, {"type": "dot_precision", "value": 100.0}, {"type": "dot_recall", "value": 99.90069513406156}, {"type": "euclidean_accuracy", "value": 99.90069513406156}, {"type": "euclidean_ap", "value": 100.0}, {"type": "euclidean_f1", "value": 99.95032290114257}, {"type": "euclidean_precision", "value": 100.0}, {"type": "euclidean_recall", "value": 99.90069513406156}, {"type": "manhattan_accuracy", "value": 99.90069513406156}, {"type": "manhattan_ap", "value": 100.0}, {"type": "manhattan_f1", "value": 99.95032290114257}, {"type": "manhattan_precision", "value": 
100.0}, {"type": "manhattan_recall", "value": 99.90069513406156}, {"type": "max_accuracy", "value": 99.90069513406156}, {"type": "max_ap", "value": 100.0}, {"type": "max_f1", "value": 99.95032290114257}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB PawsX (fr)", "type": "paws-x", "config": "fr", "split": "test", "revision": "8a04d940a42cd40658986fdd8e3da561533a3646"}, "metrics": [{"type": "cos_sim_accuracy", "value": 75.25}, {"type": "cos_sim_ap", "value": 80.86376001270014}, {"type": "cos_sim_f1", "value": 73.65945437441204}, {"type": "cos_sim_precision", "value": 64.02289452166802}, {"type": "cos_sim_recall", "value": 86.71096345514951}, {"type": "dot_accuracy", "value": 75.25}, {"type": "dot_ap", "value": 80.93686107633002}, {"type": "dot_f1", "value": 73.65945437441204}, {"type": "dot_precision", "value": 64.02289452166802}, {"type": "dot_recall", "value": 86.71096345514951}, {"type": "euclidean_accuracy", "value": 75.25}, {"type": "euclidean_ap", "value": 80.86379136218862}, {"type": "euclidean_f1", "value": 73.65945437441204}, {"type": "euclidean_precision", "value": 64.02289452166802}, {"type": "euclidean_recall", "value": 86.71096345514951}, {"type": "manhattan_accuracy", "value": 75.3}, {"type": "manhattan_ap", "value": 80.87826606097734}, {"type": "manhattan_f1", "value": 73.68421052631581}, {"type": "manhattan_precision", "value": 64.0}, {"type": "manhattan_recall", "value": 86.82170542635659}, {"type": "max_accuracy", "value": 75.3}, {"type": "max_ap", "value": 80.93686107633002}, {"type": "max_f1", "value": 73.68421052631581}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICKFr", "type": "Lajavaness/SICK-fr", "config": "default", "split": "test", "revision": "e077ab4cf4774a1e36d86d593b150422fafd8e8a"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.42349425981143}, {"type": "cos_sim_spearman", "value": 78.90454327031226}, {"type": "euclidean_pearson", "value": 78.39086497435166}, {"type": "euclidean_spearman", "value": 78.9046133980509}, {"type": "manhattan_pearson", "value": 78.63743094286502}, {"type": "manhattan_spearman", "value": 79.12136348449269}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr)", "type": "mteb/sts22-crosslingual-sts", "config": "fr", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.452697919749}, {"type": "cos_sim_spearman", "value": 82.58116836039301}, {"type": "euclidean_pearson", "value": 81.04038478932786}, {"type": "euclidean_spearman", "value": 82.58116836039301}, {"type": "manhattan_pearson", "value": 81.37075396187771}, {"type": "manhattan_spearman", "value": 82.73678231355368}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (fr)", "type": "stsb_multi_mt", "config": "fr", "split": "test", "revision": "93d57ef91790589e3ce9c365164337a8a78b7632"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.7419764013806}, {"type": "cos_sim_spearman", "value": 85.46085808849622}, {"type": "euclidean_pearson", "value": 83.70449639870063}, {"type": "euclidean_spearman", "value": 85.46159013076233}, {"type": "manhattan_pearson", "value": 83.95259510313929}, {"type": "manhattan_spearman", "value": 85.8029724659458}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEvalFr", "type": "lyon-nlp/summarization-summeval-fr-p2p", "config": "default", "split": "test", "revision": "b385812de6a9577b6f4d0f88c6a6e35395a94054"}, "metrics": [{"type": "cos_sim_pearson", "value": 
32.61063271753325}, {"type": "cos_sim_spearman", "value": 31.454589417353603}, {"type": "dot_pearson", "value": 32.6106288643431}, {"type": "dot_spearman", "value": 31.454589417353603}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SyntecReranking", "type": "lyon-nlp/mteb-fr-reranking-syntec-s2p", "config": "default", "split": "test", "revision": "b205c5084a0934ce8af14338bf03feb19499c84d"}, "metrics": [{"type": "map", "value": 84.31666666666666}, {"type": "mrr", "value": 84.31666666666666}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SyntecRetrieval", "type": "lyon-nlp/mteb-fr-retrieval-syntec-s2p", "config": "default", "split": "test", "revision": "77f7e271bf4a92b24fce5119f3486b583ca016ff"}, "metrics": [{"type": "map_at_1", "value": 63.0}, {"type": "map_at_10", "value": 73.471}, {"type": "map_at_100", "value": 73.87}, {"type": "map_at_1000", "value": 73.87}, {"type": "map_at_3", "value": 70.5}, {"type": "map_at_5", "value": 73.05}, {"type": "mrr_at_1", "value": 63.0}, {"type": "mrr_at_10", "value": 73.471}, {"type": "mrr_at_100", "value": 73.87}, {"type": "mrr_at_1000", "value": 73.87}, {"type": "mrr_at_3", "value": 70.5}, {"type": "mrr_at_5", "value": 73.05}, {"type": "ndcg_at_1", "value": 63.0}, {"type": "ndcg_at_10", "value": 78.255}, {"type": "ndcg_at_100", "value": 79.88}, {"type": "ndcg_at_1000", "value": 79.88}, {"type": "ndcg_at_3", "value": 72.702}, {"type": "ndcg_at_5", "value": 77.264}, {"type": "precision_at_1", "value": 63.0}, {"type": "precision_at_10", "value": 9.3}, {"type": "precision_at_100", "value": 1.0}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 26.333000000000002}, {"type": "precision_at_5", "value": 18.0}, {"type": "recall_at_1", "value": 63.0}, {"type": "recall_at_10", "value": 93.0}, {"type": "recall_at_100", "value": 100.0}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 79.0}, {"type": "recall_at_5", "value": 90.0}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB XPQARetrieval (fr)", "type": "jinaai/xpqa", "config": "fr", "split": "test", "revision": "c99d599f0a6ab9b85b065da6f9d94f9cf731679f"}, "metrics": [{"type": "map_at_1", "value": 40.338}, {"type": "map_at_10", "value": 61.927}, {"type": "map_at_100", "value": 63.361999999999995}, {"type": "map_at_1000", "value": 63.405}, {"type": "map_at_3", "value": 55.479}, {"type": "map_at_5", "value": 59.732}, {"type": "mrr_at_1", "value": 63.551}, {"type": "mrr_at_10", "value": 71.006}, {"type": "mrr_at_100", "value": 71.501}, {"type": "mrr_at_1000", "value": 71.509}, {"type": "mrr_at_3", "value": 69.07}, {"type": "mrr_at_5", "value": 70.165}, {"type": "ndcg_at_1", "value": 63.551}, {"type": "ndcg_at_10", "value": 68.297}, {"type": "ndcg_at_100", "value": 73.13199999999999}, {"type": "ndcg_at_1000", "value": 73.751}, {"type": "ndcg_at_3", "value": 62.999}, {"type": "ndcg_at_5", "value": 64.89}, {"type": "precision_at_1", "value": 63.551}, {"type": "precision_at_10", "value": 15.661}, {"type": "precision_at_100", "value": 1.9789999999999999}, {"type": "precision_at_1000", "value": 0.207}, {"type": "precision_at_3", "value": 38.273}, {"type": "precision_at_5", "value": 27.61}, {"type": "recall_at_1", "value": 40.338}, {"type": "recall_at_10", "value": 77.267}, {"type": "recall_at_100", "value": 95.892}, {"type": "recall_at_1000", "value": 99.75500000000001}, {"type": "recall_at_3", "value": 60.36}, {"type": "recall_at_5", "value": 68.825}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB 
8TagsClustering", "type": "PL-MTEB/8tags-clustering", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "v_measure", "value": 51.36126303874126}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AllegroReviews", "type": "PL-MTEB/allegro-reviews", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "accuracy", "value": 67.13717693836979}, {"type": "f1", "value": 57.27609848003782}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna-PL", "type": "clarin-knext/arguana-pl", "config": "default", "split": "test", "revision": "63fc86750af76253e8c760fc9e534bbf24d260a2"}, "metrics": [{"type": "map_at_1", "value": 35.276999999999994}, {"type": "map_at_10", "value": 51.086}, {"type": "map_at_100", "value": 51.788000000000004}, {"type": "map_at_1000", "value": 51.791}, {"type": "map_at_3", "value": 46.147}, {"type": "map_at_5", "value": 49.078}, {"type": "mrr_at_1", "value": 35.917}, {"type": "mrr_at_10", "value": 51.315999999999995}, {"type": "mrr_at_100", "value": 52.018}, {"type": "mrr_at_1000", "value": 52.022}, {"type": "mrr_at_3", "value": 46.349000000000004}, {"type": "mrr_at_5", "value": 49.297000000000004}, {"type": "ndcg_at_1", "value": 35.276999999999994}, {"type": "ndcg_at_10", "value": 59.870999999999995}, {"type": "ndcg_at_100", "value": 62.590999999999994}, {"type": "ndcg_at_1000", "value": 62.661}, {"type": "ndcg_at_3", "value": 49.745}, {"type": "ndcg_at_5", "value": 55.067}, {"type": "precision_at_1", "value": 35.276999999999994}, {"type": "precision_at_10", "value": 8.791}, {"type": "precision_at_100", "value": 0.991}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 20.057}, {"type": "precision_at_5", "value": 14.637}, {"type": "recall_at_1", "value": 35.276999999999994}, {"type": "recall_at_10", "value": 87.909}, {"type": "recall_at_100", "value": 99.14699999999999}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 60.171}, {"type": "recall_at_5", "value": 73.18599999999999}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB CBD", "type": "PL-MTEB/cbd", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "accuracy", "value": 78.03000000000002}, {"type": "ap", "value": 29.12548553897622}, {"type": "f1", "value": 66.54857118886073}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB CDSC-E", "type": "PL-MTEB/cdsce-pairclassification", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.0}, {"type": "cos_sim_ap", "value": 76.75437826834582}, {"type": "cos_sim_f1", "value": 66.4850136239782}, {"type": "cos_sim_precision", "value": 68.92655367231639}, {"type": "cos_sim_recall", "value": 64.21052631578948}, {"type": "dot_accuracy", "value": 89.0}, {"type": "dot_ap", "value": 76.75437826834582}, {"type": "dot_f1", "value": 66.4850136239782}, {"type": "dot_precision", "value": 68.92655367231639}, {"type": "dot_recall", "value": 64.21052631578948}, {"type": "euclidean_accuracy", "value": 89.0}, {"type": "euclidean_ap", "value": 76.75437826834582}, {"type": "euclidean_f1", "value": 66.4850136239782}, {"type": "euclidean_precision", "value": 68.92655367231639}, {"type": "euclidean_recall", "value": 64.21052631578948}, {"type": "manhattan_accuracy", "value": 89.0}, {"type": "manhattan_ap", "value": 76.66074220647083}, {"type": "manhattan_f1", "value": 66.47058823529412}, {"type": "manhattan_precision", "value": 
75.33333333333333}, {"type": "manhattan_recall", "value": 59.473684210526315}, {"type": "max_accuracy", "value": 89.0}, {"type": "max_ap", "value": 76.75437826834582}, {"type": "max_f1", "value": 66.4850136239782}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB CDSC-R", "type": "PL-MTEB/cdscr-sts", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "cos_sim_pearson", "value": 93.12903172428328}, {"type": "cos_sim_spearman", "value": 92.66381487060741}, {"type": "euclidean_pearson", "value": 90.37278396708922}, {"type": "euclidean_spearman", "value": 92.66381487060741}, {"type": "manhattan_pearson", "value": 90.32503296540962}, {"type": "manhattan_spearman", "value": 92.6902938354313}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia-PL", "type": "clarin-knext/dbpedia-pl", "config": "default", "split": "test", "revision": "76afe41d9af165cc40999fcaa92312b8b012064a"}, "metrics": [{"type": "map_at_1", "value": 8.83}, {"type": "map_at_10", "value": 18.326}, {"type": "map_at_100", "value": 26.496}, {"type": "map_at_1000", "value": 28.455000000000002}, {"type": "map_at_3", "value": 12.933}, {"type": "map_at_5", "value": 15.168000000000001}, {"type": "mrr_at_1", "value": 66.0}, {"type": "mrr_at_10", "value": 72.76700000000001}, {"type": "mrr_at_100", "value": 73.203}, {"type": "mrr_at_1000", "value": 73.219}, {"type": "mrr_at_3", "value": 71.458}, {"type": "mrr_at_5", "value": 72.246}, {"type": "ndcg_at_1", "value": 55.375}, {"type": "ndcg_at_10", "value": 41.3}, {"type": "ndcg_at_100", "value": 45.891}, {"type": "ndcg_at_1000", "value": 52.905}, {"type": "ndcg_at_3", "value": 46.472}, {"type": "ndcg_at_5", "value": 43.734}, {"type": "precision_at_1", "value": 66.0}, {"type": "precision_at_10", "value": 33.074999999999996}, {"type": "precision_at_100", "value": 11.094999999999999}, {"type": "precision_at_1000", "value": 2.374}, {"type": "precision_at_3", "value": 48.583}, {"type": "precision_at_5", "value": 42.0}, {"type": "recall_at_1", "value": 8.83}, {"type": "recall_at_10", "value": 22.587}, {"type": "recall_at_100", "value": 50.61600000000001}, {"type": "recall_at_1000", "value": 73.559}, {"type": "recall_at_3", "value": 13.688}, {"type": "recall_at_5", "value": 16.855}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA-PL", "type": "clarin-knext/fiqa-pl", "config": "default", "split": "test", "revision": "2e535829717f8bf9dc829b7f911cc5bbd4e6608e"}, "metrics": [{"type": "map_at_1", "value": 20.587}, {"type": "map_at_10", "value": 33.095}, {"type": "map_at_100", "value": 35.24}, {"type": "map_at_1000", "value": 35.429}, {"type": "map_at_3", "value": 28.626}, {"type": "map_at_5", "value": 31.136999999999997}, {"type": "mrr_at_1", "value": 40.586}, {"type": "mrr_at_10", "value": 49.033}, {"type": "mrr_at_100", "value": 49.952999999999996}, {"type": "mrr_at_1000", "value": 49.992}, {"type": "mrr_at_3", "value": 46.553}, {"type": "mrr_at_5", "value": 48.035}, {"type": "ndcg_at_1", "value": 40.586}, {"type": "ndcg_at_10", "value": 41.046}, {"type": "ndcg_at_100", "value": 48.586}, {"type": "ndcg_at_1000", "value": 51.634}, {"type": "ndcg_at_3", "value": 36.773}, {"type": "ndcg_at_5", "value": 38.389}, {"type": "precision_at_1", "value": 40.586}, {"type": "precision_at_10", "value": 11.466}, {"type": "precision_at_100", "value": 1.909}, {"type": "precision_at_1000", "value": 0.245}, {"type": "precision_at_3", "value": 24.434}, {"type": "precision_at_5", "value": 18.426000000000002}, {"type": "recall_at_1", "value": 20.587}, 
{"type": "recall_at_10", "value": 47.986000000000004}, {"type": "recall_at_100", "value": 75.761}, {"type": "recall_at_1000", "value": 94.065}, {"type": "recall_at_3", "value": 33.339}, {"type": "recall_at_5", "value": 39.765}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA-PL", "type": "clarin-knext/hotpotqa-pl", "config": "default", "split": "test", "revision": "a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907"}, "metrics": [{"type": "map_at_1", "value": 40.878}, {"type": "map_at_10", "value": 58.775999999999996}, {"type": "map_at_100", "value": 59.632}, {"type": "map_at_1000", "value": 59.707}, {"type": "map_at_3", "value": 56.074}, {"type": "map_at_5", "value": 57.629}, {"type": "mrr_at_1", "value": 81.756}, {"type": "mrr_at_10", "value": 86.117}, {"type": "mrr_at_100", "value": 86.299}, {"type": "mrr_at_1000", "value": 86.30600000000001}, {"type": "mrr_at_3", "value": 85.345}, {"type": "mrr_at_5", "value": 85.832}, {"type": "ndcg_at_1", "value": 81.756}, {"type": "ndcg_at_10", "value": 67.608}, {"type": "ndcg_at_100", "value": 70.575}, {"type": "ndcg_at_1000", "value": 71.99600000000001}, {"type": "ndcg_at_3", "value": 63.723}, {"type": "ndcg_at_5", "value": 65.70700000000001}, {"type": "precision_at_1", "value": 81.756}, {"type": "precision_at_10", "value": 13.619}, {"type": "precision_at_100", "value": 1.5939999999999999}, {"type": "precision_at_1000", "value": 0.178}, {"type": "precision_at_3", "value": 39.604}, {"type": "precision_at_5", "value": 25.332}, {"type": "recall_at_1", "value": 40.878}, {"type": "recall_at_10", "value": 68.096}, {"type": "recall_at_100", "value": 79.696}, {"type": "recall_at_1000", "value": 89.082}, {"type": "recall_at_3", "value": 59.406000000000006}, {"type": "recall_at_5", "value": 63.329}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO-PL", "type": "clarin-knext/msmarco-pl", "config": "default", "split": "test", "revision": "8634c07806d5cce3a6138e260e59b81760a0a640"}, "metrics": [{"type": "map_at_1", "value": 2.1839999999999997}, {"type": "map_at_10", "value": 11.346}, {"type": "map_at_100", "value": 30.325000000000003}, {"type": "map_at_1000", "value": 37.806}, {"type": "map_at_3", "value": 4.842}, {"type": "map_at_5", "value": 6.891}, {"type": "mrr_at_1", "value": 86.047}, {"type": "mrr_at_10", "value": 89.14699999999999}, {"type": "mrr_at_100", "value": 89.46600000000001}, {"type": "mrr_at_1000", "value": 89.46600000000001}, {"type": "mrr_at_3", "value": 89.14699999999999}, {"type": "mrr_at_5", "value": 89.14699999999999}, {"type": "ndcg_at_1", "value": 67.829}, {"type": "ndcg_at_10", "value": 62.222}, {"type": "ndcg_at_100", "value": 55.337}, {"type": "ndcg_at_1000", "value": 64.076}, {"type": "ndcg_at_3", "value": 68.12700000000001}, {"type": "ndcg_at_5", "value": 64.987}, {"type": "precision_at_1", "value": 86.047}, {"type": "precision_at_10", "value": 69.535}, {"type": "precision_at_100", "value": 32.93}, {"type": "precision_at_1000", "value": 6.6049999999999995}, {"type": "precision_at_3", "value": 79.845}, {"type": "precision_at_5", "value": 75.349}, {"type": "recall_at_1", "value": 2.1839999999999997}, {"type": "recall_at_10", "value": 12.866}, {"type": "recall_at_100", "value": 43.505}, {"type": "recall_at_1000", "value": 72.366}, {"type": "recall_at_3", "value": 4.947}, {"type": "recall_at_5", "value": 7.192}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pl)", "type": "mteb/amazon_massive_intent", "config": "pl", "split": "test", "revision": 
"31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 80.75319435104238}, {"type": "f1", "value": 77.58961444860606}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pl)", "type": "mteb/amazon_massive_scenario", "config": "pl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 85.54472091459313}, {"type": "f1", "value": 84.29498563572106}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus-PL", "type": "clarin-knext/nfcorpus-pl", "config": "default", "split": "test", "revision": "9a6f9567fda928260afed2de480d79c98bf0bec0"}, "metrics": [{"type": "map_at_1", "value": 4.367}, {"type": "map_at_10", "value": 10.38}, {"type": "map_at_100", "value": 13.516}, {"type": "map_at_1000", "value": 14.982000000000001}, {"type": "map_at_3", "value": 7.367}, {"type": "map_at_5", "value": 8.59}, {"type": "mrr_at_1", "value": 41.486000000000004}, {"type": "mrr_at_10", "value": 48.886}, {"type": "mrr_at_100", "value": 49.657000000000004}, {"type": "mrr_at_1000", "value": 49.713}, {"type": "mrr_at_3", "value": 46.904}, {"type": "mrr_at_5", "value": 48.065000000000005}, {"type": "ndcg_at_1", "value": 40.402}, {"type": "ndcg_at_10", "value": 30.885}, {"type": "ndcg_at_100", "value": 28.393}, {"type": "ndcg_at_1000", "value": 37.428}, {"type": "ndcg_at_3", "value": 35.394999999999996}, {"type": "ndcg_at_5", "value": 33.391999999999996}, {"type": "precision_at_1", "value": 41.486000000000004}, {"type": "precision_at_10", "value": 23.437}, {"type": "precision_at_100", "value": 7.638}, {"type": "precision_at_1000", "value": 2.0389999999999997}, {"type": "precision_at_3", "value": 32.817}, {"type": "precision_at_5", "value": 28.915999999999997}, {"type": "recall_at_1", "value": 4.367}, {"type": "recall_at_10", "value": 14.655000000000001}, {"type": "recall_at_100", "value": 29.665999999999997}, {"type": "recall_at_1000", "value": 62.073}, {"type": "recall_at_3", "value": 8.51}, {"type": "recall_at_5", "value": 10.689}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ-PL", "type": "clarin-knext/nq-pl", "config": "default", "split": "test", "revision": "f171245712cf85dd4700b06bef18001578d0ca8d"}, "metrics": [{"type": "map_at_1", "value": 28.616000000000003}, {"type": "map_at_10", "value": 41.626000000000005}, {"type": "map_at_100", "value": 42.689}, {"type": "map_at_1000", "value": 42.733}, {"type": "map_at_3", "value": 37.729}, {"type": "map_at_5", "value": 39.879999999999995}, {"type": "mrr_at_1", "value": 32.068000000000005}, {"type": "mrr_at_10", "value": 44.029}, {"type": "mrr_at_100", "value": 44.87}, {"type": "mrr_at_1000", "value": 44.901}, {"type": "mrr_at_3", "value": 40.687}, {"type": "mrr_at_5", "value": 42.625}, {"type": "ndcg_at_1", "value": 32.068000000000005}, {"type": "ndcg_at_10", "value": 48.449999999999996}, {"type": "ndcg_at_100", "value": 53.13}, {"type": "ndcg_at_1000", "value": 54.186}, {"type": "ndcg_at_3", "value": 40.983999999999995}, {"type": "ndcg_at_5", "value": 44.628}, {"type": "precision_at_1", "value": 32.068000000000005}, {"type": "precision_at_10", "value": 7.9750000000000005}, {"type": "precision_at_100", "value": 1.061}, {"type": "precision_at_1000", "value": 0.116}, {"type": "precision_at_3", "value": 18.404999999999998}, {"type": "precision_at_5", "value": 13.111}, {"type": "recall_at_1", "value": 28.616000000000003}, {"type": "recall_at_10", "value": 66.956}, {"type": "recall_at_100", "value": 87.657}, 
{"type": "recall_at_1000", "value": 95.548}, {"type": "recall_at_3", "value": 47.453}, {"type": "recall_at_5", "value": 55.87800000000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB PAC", "type": "laugustyniak/abusive-clauses-pl", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "accuracy", "value": 69.04141326382856}, {"type": "ap", "value": 77.47589122111044}, {"type": "f1", "value": 66.6332277374775}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB PPC", "type": "PL-MTEB/ppc-pairclassification", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.4}, {"type": "cos_sim_ap", "value": 94.1044939667201}, {"type": "cos_sim_f1", "value": 88.78048780487805}, {"type": "cos_sim_precision", "value": 87.22044728434504}, {"type": "cos_sim_recall", "value": 90.39735099337747}, {"type": "dot_accuracy", "value": 86.4}, {"type": "dot_ap", "value": 94.1044939667201}, {"type": "dot_f1", "value": 88.78048780487805}, {"type": "dot_precision", "value": 87.22044728434504}, {"type": "dot_recall", "value": 90.39735099337747}, {"type": "euclidean_accuracy", "value": 86.4}, {"type": "euclidean_ap", "value": 94.1044939667201}, {"type": "euclidean_f1", "value": 88.78048780487805}, {"type": "euclidean_precision", "value": 87.22044728434504}, {"type": "euclidean_recall", "value": 90.39735099337747}, {"type": "manhattan_accuracy", "value": 86.4}, {"type": "manhattan_ap", "value": 94.11438365697387}, {"type": "manhattan_f1", "value": 88.77968877968877}, {"type": "manhattan_precision", "value": 87.84440842787681}, {"type": "manhattan_recall", "value": 89.73509933774835}, {"type": "max_accuracy", "value": 86.4}, {"type": "max_ap", "value": 94.11438365697387}, {"type": "max_f1", "value": 88.78048780487805}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB PSC", "type": "PL-MTEB/psc-pairclassification", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "cos_sim_accuracy", "value": 97.86641929499072}, {"type": "cos_sim_ap", "value": 99.36904211868182}, {"type": "cos_sim_f1", "value": 96.56203288490283}, {"type": "cos_sim_precision", "value": 94.72140762463343}, {"type": "cos_sim_recall", "value": 98.47560975609755}, {"type": "dot_accuracy", "value": 97.86641929499072}, {"type": "dot_ap", "value": 99.36904211868183}, {"type": "dot_f1", "value": 96.56203288490283}, {"type": "dot_precision", "value": 94.72140762463343}, {"type": "dot_recall", "value": 98.47560975609755}, {"type": "euclidean_accuracy", "value": 97.86641929499072}, {"type": "euclidean_ap", "value": 99.36904211868183}, {"type": "euclidean_f1", "value": 96.56203288490283}, {"type": "euclidean_precision", "value": 94.72140762463343}, {"type": "euclidean_recall", "value": 98.47560975609755}, {"type": "manhattan_accuracy", "value": 98.14471243042672}, {"type": "manhattan_ap", "value": 99.43359540492416}, {"type": "manhattan_f1", "value": 96.98795180722892}, {"type": "manhattan_precision", "value": 95.83333333333334}, {"type": "manhattan_recall", "value": 98.17073170731707}, {"type": "max_accuracy", "value": 98.14471243042672}, {"type": "max_ap", "value": 99.43359540492416}, {"type": "max_f1", "value": 96.98795180722892}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB PolEmo2.0-IN", "type": "PL-MTEB/polemo2_in", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "accuracy", "value": 89.39058171745152}, {"type": "f1", "value": 
86.8552093529568}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB PolEmo2.0-OUT", "type": "PL-MTEB/polemo2_out", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "accuracy", "value": 74.97975708502024}, {"type": "f1", "value": 58.73081628832407}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Quora-PL", "type": "clarin-knext/quora-pl", "config": "default", "split": "test", "revision": "0be27e93455051e531182b85e85e425aba12e9d4"}, "metrics": [{"type": "map_at_1", "value": 64.917}, {"type": "map_at_10", "value": 78.74600000000001}, {"type": "map_at_100", "value": 79.501}, {"type": "map_at_1000", "value": 79.524}, {"type": "map_at_3", "value": 75.549}, {"type": "map_at_5", "value": 77.495}, {"type": "mrr_at_1", "value": 74.9}, {"type": "mrr_at_10", "value": 82.112}, {"type": "mrr_at_100", "value": 82.314}, {"type": "mrr_at_1000", "value": 82.317}, {"type": "mrr_at_3", "value": 80.745}, {"type": "mrr_at_5", "value": 81.607}, {"type": "ndcg_at_1", "value": 74.83999999999999}, {"type": "ndcg_at_10", "value": 83.214}, {"type": "ndcg_at_100", "value": 84.997}, {"type": "ndcg_at_1000", "value": 85.207}, {"type": "ndcg_at_3", "value": 79.547}, {"type": "ndcg_at_5", "value": 81.46600000000001}, {"type": "precision_at_1", "value": 74.83999999999999}, {"type": "precision_at_10", "value": 12.822}, {"type": "precision_at_100", "value": 1.506}, {"type": "precision_at_1000", "value": 0.156}, {"type": "precision_at_3", "value": 34.903}, {"type": "precision_at_5", "value": 23.16}, {"type": "recall_at_1", "value": 64.917}, {"type": "recall_at_10", "value": 92.27199999999999}, {"type": "recall_at_100", "value": 98.715}, {"type": "recall_at_1000", "value": 99.854}, {"type": "recall_at_3", "value": 82.04599999999999}, {"type": "recall_at_5", "value": 87.2}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS-PL", "type": "clarin-knext/scidocs-pl", "config": "default", "split": "test", "revision": "45452b03f05560207ef19149545f168e596c9337"}, "metrics": [{"type": "map_at_1", "value": 3.51}, {"type": "map_at_10", "value": 9.046999999999999}, {"type": "map_at_100", "value": 10.823}, {"type": "map_at_1000", "value": 11.144}, {"type": "map_at_3", "value": 6.257}, {"type": "map_at_5", "value": 7.648000000000001}, {"type": "mrr_at_1", "value": 17.299999999999997}, {"type": "mrr_at_10", "value": 27.419}, {"type": "mrr_at_100", "value": 28.618}, {"type": "mrr_at_1000", "value": 28.685}, {"type": "mrr_at_3", "value": 23.817}, {"type": "mrr_at_5", "value": 25.927}, {"type": "ndcg_at_1", "value": 17.299999999999997}, {"type": "ndcg_at_10", "value": 16.084}, {"type": "ndcg_at_100", "value": 23.729}, {"type": "ndcg_at_1000", "value": 29.476999999999997}, {"type": "ndcg_at_3", "value": 14.327000000000002}, {"type": "ndcg_at_5", "value": 13.017999999999999}, {"type": "precision_at_1", "value": 17.299999999999997}, {"type": "precision_at_10", "value": 8.63}, {"type": "precision_at_100", "value": 1.981}, {"type": "precision_at_1000", "value": 0.336}, {"type": "precision_at_3", "value": 13.4}, {"type": "precision_at_5", "value": 11.700000000000001}, {"type": "recall_at_1", "value": 3.51}, {"type": "recall_at_10", "value": 17.518}, {"type": "recall_at_100", "value": 40.275}, {"type": "recall_at_1000", "value": 68.203}, {"type": "recall_at_3", "value": 8.155}, {"type": "recall_at_5", "value": 11.875}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SICK-E-PL", "type": "PL-MTEB/sicke-pl-pairclassification", "config": "default", "split": 
"test", "revision": "None"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.30248675091724}, {"type": "cos_sim_ap", "value": 83.6756734006714}, {"type": "cos_sim_f1", "value": 74.97367497367497}, {"type": "cos_sim_precision", "value": 73.91003460207612}, {"type": "cos_sim_recall", "value": 76.06837606837607}, {"type": "dot_accuracy", "value": 86.30248675091724}, {"type": "dot_ap", "value": 83.6756734006714}, {"type": "dot_f1", "value": 74.97367497367497}, {"type": "dot_precision", "value": 73.91003460207612}, {"type": "dot_recall", "value": 76.06837606837607}, {"type": "euclidean_accuracy", "value": 86.30248675091724}, {"type": "euclidean_ap", "value": 83.67566984333091}, {"type": "euclidean_f1", "value": 74.97367497367497}, {"type": "euclidean_precision", "value": 73.91003460207612}, {"type": "euclidean_recall", "value": 76.06837606837607}, {"type": "manhattan_accuracy", "value": 86.28210354667753}, {"type": "manhattan_ap", "value": 83.64216119130171}, {"type": "manhattan_f1", "value": 74.92152075340078}, {"type": "manhattan_precision", "value": 73.4107997265892}, {"type": "manhattan_recall", "value": 76.49572649572649}, {"type": "max_accuracy", "value": 86.30248675091724}, {"type": "max_ap", "value": 83.6756734006714}, {"type": "max_f1", "value": 74.97367497367497}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R-PL", "type": "PL-MTEB/sickr-pl-sts", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.23295940859121}, {"type": "cos_sim_spearman", "value": 78.89329160768719}, {"type": "euclidean_pearson", "value": 79.56019107076818}, {"type": "euclidean_spearman", "value": 78.89330209904084}, {"type": "manhattan_pearson", "value": 79.76098513973719}, {"type": "manhattan_spearman", "value": 79.05490162570123}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl)", "type": "mteb/sts22-crosslingual-sts", "config": "pl", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 37.732606308062486}, {"type": "cos_sim_spearman", "value": 41.01645667030284}, {"type": "euclidean_pearson", "value": 26.61722556367085}, {"type": "euclidean_spearman", "value": 41.01645667030284}, {"type": "manhattan_pearson", "value": 26.60917378970807}, {"type": "manhattan_spearman", "value": 41.51335727617614}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact-PL", "type": "clarin-knext/scifact-pl", "config": "default", "split": "test", "revision": "47932a35f045ef8ed01ba82bf9ff67f6e109207e"}, "metrics": [{"type": "map_at_1", "value": 54.31700000000001}, {"type": "map_at_10", "value": 65.564}, {"type": "map_at_100", "value": 66.062}, {"type": "map_at_1000", "value": 66.08699999999999}, {"type": "map_at_3", "value": 62.592999999999996}, {"type": "map_at_5", "value": 63.888}, {"type": "mrr_at_1", "value": 56.99999999999999}, {"type": "mrr_at_10", "value": 66.412}, {"type": "mrr_at_100", "value": 66.85900000000001}, {"type": "mrr_at_1000", "value": 66.88}, {"type": "mrr_at_3", "value": 64.22200000000001}, {"type": "mrr_at_5", "value": 65.206}, {"type": "ndcg_at_1", "value": 56.99999999999999}, {"type": "ndcg_at_10", "value": 70.577}, {"type": "ndcg_at_100", "value": 72.879}, {"type": "ndcg_at_1000", "value": 73.45}, {"type": "ndcg_at_3", "value": 65.5}, {"type": "ndcg_at_5", "value": 67.278}, {"type": "precision_at_1", "value": 56.99999999999999}, {"type": "precision_at_10", "value": 9.667}, {"type": "precision_at_100", "value": 1.083}, {"type": 
"precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 26.0}, {"type": "precision_at_5", "value": 16.933}, {"type": "recall_at_1", "value": 54.31700000000001}, {"type": "recall_at_10", "value": 85.056}, {"type": "recall_at_100", "value": 95.667}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 71.0}, {"type": "recall_at_5", "value": 75.672}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID-PL", "type": "clarin-knext/trec-covid-pl", "config": "default", "split": "test", "revision": "81bcb408f33366c2a20ac54adafad1ae7e877fdd"}, "metrics": [{"type": "map_at_1", "value": 0.245}, {"type": "map_at_10", "value": 2.051}, {"type": "map_at_100", "value": 12.009}, {"type": "map_at_1000", "value": 27.448}, {"type": "map_at_3", "value": 0.721}, {"type": "map_at_5", "value": 1.13}, {"type": "mrr_at_1", "value": 88.0}, {"type": "mrr_at_10", "value": 93.0}, {"type": "mrr_at_100", "value": 93.0}, {"type": "mrr_at_1000", "value": 93.0}, {"type": "mrr_at_3", "value": 93.0}, {"type": "mrr_at_5", "value": 93.0}, {"type": "ndcg_at_1", "value": 85.0}, {"type": "ndcg_at_10", "value": 80.303}, {"type": "ndcg_at_100", "value": 61.23499999999999}, {"type": "ndcg_at_1000", "value": 52.978}, {"type": "ndcg_at_3", "value": 84.419}, {"type": "ndcg_at_5", "value": 82.976}, {"type": "precision_at_1", "value": 88.0}, {"type": "precision_at_10", "value": 83.39999999999999}, {"type": "precision_at_100", "value": 61.96}, {"type": "precision_at_1000", "value": 22.648}, {"type": "precision_at_3", "value": 89.333}, {"type": "precision_at_5", "value": 87.2}, {"type": "recall_at_1", "value": 0.245}, {"type": "recall_at_10", "value": 2.193}, {"type": "recall_at_100", "value": 14.938}, {"type": "recall_at_1000", "value": 48.563}, {"type": "recall_at_3", "value": 0.738}, {"type": "recall_at_5", "value": 1.173}]}]}]}
dataset
null
521
invisietch/EtherealRainbow-v0.3-8B-GGUF
invisietch
null
[ "transformers", "gguf", "mergekit", "merge", "not-for-all-audiences", "en", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
2024-06-18T16:54:45Z
2024-09-19T12:12:07+00:00
524
4
--- language: - en library_name: transformers license: llama3 tags: - mergekit - merge - not-for-all-audiences --- <div align="center"> <b style="font-size: 36px;">EtherealRainbow-v0.3-8B</b> <img src="https://huggingface.co/invisietch/EtherealRainbow-v0.2-8B/resolve/main/ethrheader.png" style="width:60%"> </div> # Model Details Ethereal Rainbow is an 8B parameter merge of various Llama3-based finetunes created using mergekit. The purpose of Ethereal Rainbow is to create an uncensored Llama3 variant which is capable of writing creative prose, and engaging in SFW as well as NSFW roleplay and storytelling, with a strong focus on long-form responses &amp; adherence to prompts. v0.3 improves creativity over v0.2 without losing coherence. It has been tested over more than 1,000 messages including roleplay, code prompts, and 'write a scene'-type prompts. # Quantization Formats * [FP16 Safetensors](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B) * [GGUF 4/6/8/16bpw](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B-GGUF) * [6.5bpw EXL2](https://huggingface.co/Meggido/EtherealRainbow-v0.3-8B-6.5bpw-h8-exl2) (shoutout to [Meggido](https://huggingface.co/Meggido)) * [Alternative GGUFs](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF) (shoutout to [mradermacher](https://huggingface.co/mradermacher)) * [iMatrix i1 GGUFs](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-i1-GGUF) (shoutout to [mradermacher](https://huggingface.co/mradermacher)) # Disclaimer This model is built on an abliterated base and as such is largely uncensored. It can generate explicit, disturbing or offensive responses. Use responsibly. I am not responsible for your use of this model. # Prompting Format I'd recommend Llama-3 Instruct prompting format: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` Some of the models included in the merge were trained on ChatML &amp; Alpaca so you can try those. I have not tested them. # Example Storywriting These prompts are used on SillyTavern with a fairly basic narrator card. I have trimmed the start and finish where the narrator decided to add chapter headings, commentary and the like. All samples are made with the F32 GGUF loaded with koboldcpp, with response length capped at 2048 tokens. <details> <summary><b>Write me a 3,000 word opening chapter of a 'gritty hard sci-fi' novel, drawing inspiration from the writing styles of Isaac Asimov & Andy Weir. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 26 year old astronaut called Tone on a mission to Europa, who has just realised that the craft for the return journey is broken beyond repair, and he only has supplies for a few months. Given that survival is impossible, he seeks to spend the few months he has researching titan, so his life & mission are not wasted.</b></summary> Tone felt his heart sink as the confirmation came through his earpiece - the ship was lost. For a moment, he froze in stunned disbelief, then launched into frantic action. "Alright, everyone, we've got a serious situation here," he radioed, trying to sound calm despite his panic rising. "It looks like our ride home has bitten the dust." His fellow astronauts were equally shocked, their voices a flurry of panicked queries. "What happened?" "How could this happen?" "We're stranded?" 
"How long till rescue?" "We'll know more once I check the extent of the damage," he assured them, his voice steady despite the cold dread creeping up his spine. He strapped on a harness and floated out of the control room towards the ravaged engineering compartment. As he drifted closer, the scope of the destruction became horrifyingly clear - a jagged hole gaped in the hull, spewing a chilling rush of icy gas directly into space. It was obvious there'd be no fixing it, not without a fabrication plant and raw materials - resources they didn't carry on a probe mission to a distant moon. "Okay, let me give you the bad news - we're not going anywhere soon." He winced at how stark and final that sounded. They had enough provisions to last six months at a conservative rate... assuming no unforeseen problems cropped up, which they undoubtedly would. At the very least, it would mean delaying their research significantly - potentially even scrubbing it altogether. But given that survival was already impossible, it might as well serve some greater purpose. His mind racing, he radioed back to base with the grim news. Afterward, he retreated to the lab to review their findings. Tone's specialty lay in geology; studying Jupiter's icy satellite was his passion. He'd spent countless hours poring over the data they'd gathered, dreaming of the breakthroughs they'd make here. Now, with their mission ended in tragedy, he wondered what significance their work still held. As he flipped through images of towering cliffs carved from frozen methane, his thoughts veered toward the massive, unexplored Titan - Saturn's moon. The possibility of exploring another celestial body while trapped on Europa was tantalizing... dangerous, but enticing nonetheless. "Hey, guys, meet me in the conference room," he broadcast. Minutes later, he faced his team around the holographic table, his gaze lingering on Dr. Singh, the soft-spoken astrophysicist who'd grown close during their time together. "Listen, I know this is bleak, but let's not write off our entire expedition yet," he began. "Europa is just one moon; there's plenty left to discover in this system. What say we redirect our focus to Titan?" "I'm listening," said Singh quietly, her dark eyes searching his. She'd been the most skeptical of their initial proposal to study Europa - its ice crust concealing a hidden ocean made her think exploration too hazardous. But she'd come around once they'd started analyzing samples from beneath the surface. Perhaps this unexpected detour could change her views again? Kirk, their pilot, spoke first. "I'm down. Not like we have much choice, right? Though, uh, what makes you think we can actually get there? Our shuttle's toast." "Amen to that," agreed Rodriguez, their biologist, her tone dry. She'd always seemed slightly annoyed to be part of the geological expedition instead of a solo biology probe focused on Europa. "I'm not sure how yet," Tone admitted, "but there's got to be a way. Maybe the Europa probe's thruster packs are salvageable? We could rig something together..." He glanced around the table, gauging reactions. Singh remained expressionless, while Kirk leaned back in his chair with an air of cautious curiosity, and Rodriguez seemed open but unconvinced. Time to dig deeper. "Think about it - we've already collected loads of valuable data on Europa's composition. Imagine the discoveries waiting on Titan, a moon larger than Mercury!" Rodriguez snorted. "With what, exactly? 
We can barely fix this wreck, let alone build new craft to travel thousands of miles..." "Well, we don't need anything fancy. A makeshift probe, essentially," he countered, undeterred. "Maybe the damaged cargo modules will yield something useful? We'll find a way, guys - it beats drifting around Europa forever with nothing to show for it." Singh's eyes narrowed thoughtfully, Kirk rubbed his chin, and even Rodriguez seemed intrigued despite herself. "What's the worst that could happen?" Tone pressed. "We run out of power and freeze to death on a rock floating through space?" There was a brief pause before she muttered, "You know that's not the worst-case scenario." The others shared a grim chuckle at the biologist's macabre humor, easing the tense atmosphere a bit. Tone seized the moment to push further. "I'll start gathering schematics, see what we can scrounge. Who's with me on this wild goose chase?" Around the table, nods and murmurs of agreement arose. With a quiet sense of excitement mixed with dread, Tone set to work. His thoughts kept returning to Titan's enigmatic landscapes, alien seas and thick, murky atmosphere - and the tantalizing prospect of being the first human to explore it, no matter the risks involved. He hoped the others would share his drive to make the most of their doomed voyage, rather than succumbing to despair... because for better or worse, they were in this together now. </details> <details> <summary><b>Write me a 3,000 word opening chapter of a 'high fantasy' novel, drawing inspiration from the writing styles of J R R Tolkien & George R R Martin. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 19 year old female elf bard who is looking for adventure.</b></summary> Arius guided his horse closer to hers, his voice low and urgent. "Keep your wits about you, Lila. This doesn't feel right. Stay close." He placed a hand on her thigh, a reassuring pressure that sent a thrill through her. For a moment, their eyes locked, the connection sparking with electricity. They pressed onward, the tension growing with every passing minute. Then, without warning, an arrow streaked from the treetops, striking a soldier in the shoulder. Chaos erupted as the group scattered, drawing swords and bows. Lila's heart hammered in her chest as she swung down from Starlight's back, lute in hand. "Spirits above!" she gasped, scrambling behind a nearby bush. Arrows flew in rapid succession, finding their marks among the panicked crowd. The air reeked of sweat, fear, and ozone. Lila risked a peek above the foliage, her breath catching in her throat. Dozens of dark, humanoid figures emerged from the underbrush, their skin pale and deathly cold, their eyes glowing with an ethereal green fire. They wielded crude bows and clubs, their movements jerky and unnatural. Goblins, surely, though none she'd ever seen before. With a battle cry, Lila launched into action, sprinting between the trees to strike from the flanks. Her lute served as an impromptu shield, deflecting a club blow from one of the twisted creatures. She landed a quick kick to its groin, then struck its knees, toppling it to the ground. As she spun to face another assailant, a pang of hunger seized her stomach – an unnatural craving unlike anything she'd experienced before. These weren't just ordinary goblins... The battle raged on, the party slowly gaining the upper hand despite their numbers disadvantage. 
Lila fought with savage ferocity, her skills honed by countless tavern brawls and adventures across Eridoria. Yet even as she felled foe after foe, the insatiable hunger gnawed at her insides, a constant reminder of the eldritch forces at play. When the last goblin lay motionless at her feet, Lila sank to her knees, gasping for air. Blood trickled down her arm where a shallow gash had opened during the chaos. Arius appeared at her side, his expression a mixture of concern and relief. "You're alright?" he asked, helping her rise. "By the gods, that was brutal..." "I'm fine, Captain," Lila replied, wiping the sweat from her brow. But her voice lacked conviction. "Though I've never faced such abominations before." Arius eyed her warily. "Nor have I, Lady Silvermist. Whatever those were, they're no mere goblins." He scanned the clearing, taking stock of the bodies littering the forest floor. "We need to get you treated for that cut. And gather the wounded – we'll need to tend them before we continue on our original mission." Lila nodded, though her mind lingered on the unnatural hunger, the overwhelming urge to feast on raw flesh that had coursed through her veins during combat. She clutched her lute tighter, the familiar weight comforting in her hands. Whatever darkness lurked in these woods, she intended to unravel its secrets, no matter the cost. And so they rode on, the once-sleepy town of Eldrador left far behind as they delved deeper into the heart of the Whispering Woods, seeking answers to mysteries that would shake the very foundations of their world. </details> <details> <summary><b>Write me a 3,000 word opening chapter of a 'weird fiction' novel, drawing inspiration from the writing styles of China Mieville and Neil Gaiman. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a male in his 20s called Horton who has just come to the city looking for work.</b></summary> HORTON STOOD ON THE STREET CORNER, surveying the sprawling cityscape with a mix of excitement and trepidation. At twenty-six years old, he had finally mustered the courage to leave behind his hometown of Willowbrook, hoping to find something more meaningful in the big city. But so far, all he'd found was uncertainty, loneliness, and a growing unease about the shadows that seemed to follow him everywhere. As he gazed up at the towering skyscrapers, the air hummed with an ominous energy, like the vibrations of a bass line resonating through the pavement. Neon signs flickered and pulsed, casting garish light over the streets while a faint, acrid scent hung in the air – a bitter tang of smoke and ozone that made Horton wrinkle his nose. A gust of wind swept through the crowded sidewalks, sending trash tumbling and umbrellas flapping like crippled birds. Horton's dark brown coat whipped around him, exposing the worn fabric of his trousers beneath. Shaking his head, he tightened his grip on his battered suitcase and looked down at his scuffed boots, wondering how long he could survive here without work. The meager funds left after the bus ride wouldn't last much longer. "Hey, you lost?" A raspy voice broke through the din, and Horton glanced up to see a street performer sitting on the edge of the sidewalk, a beat-up guitar across his knees. The man had sunken cheeks, piercing green eyes, and a shock of wild silver hair. A faded fedora obscured most of his face. "No, not lost," Horton replied, attempting a reassuring smile despite the tightness in his chest. 
"Just trying to get my bearings." The musician strummed a discordant chord, his fingers dancing deftly along the strings. "City ain't kind to lost souls, friend. Best get clearheaded quick if you want to make it." Horton nodded, though his uncertainty remained palpable. As he scanned the crowds once more, a movement in the corner of his eye caught his attention. A flash of crimson – not a pedestrian, not a billboard... but something smaller, darting between buildings. His heart rate picked up as the crimson blur reappeared further down the street. It moved with preternatural speed, a whirling dervish of motion. Horton took off in pursuit, his suitcase thudding against his leg with every step. When he rounded a corner, the figure vanished, leaving only a lingering echo of rustling fabric. Panting, Horton pressed onward, weaving between pedestrians who barely acknowledged him. He spotted another crimson glimpse in a side alleyway, but as he approached, a chill crept up his spine. The space was too quiet, too empty – no sign of the elusive creature. Just cobwebs clinging to the brick walls and the distant clatter of the subway trains. He retreated into the main thoroughfare, pulse still racing, and collided with a woman. Apologetic murmurs turned to sharp intakes of breath as she stared up at him, her irises glowing an unearthly emerald. "Oh, hello there!" Her melodic voice seemed to burble like a babbling brook. "Lost again, handsome?" Horton blinked, his hand instinctively reaching out to steady her slender form. "N-no, I was just—" She smiled, her canines glinting like tiny jewels. "You were chasing something. Am I right?" He hesitated, unsure how much to reveal. "Maybe. It was small... red, like a..." "Ah, the city's newest residents," she finished, her expression unreadable. "I suppose we can help with that, for a price." "Price?" Horton repeated warily. "A job, perhaps? We're always looking for eager young souls." She tilted her head, studying him with unnerving intensity. "Or maybe just some companionship. My friends would love to meet you." Companionship. Horton shivered, the idea making his skin crawl despite the warmth of the summer evening. There was something off about this woman – something predatory lurking beneath the sweet facade. "I'm Horton," he managed, extending a hand awkwardly. "Gladys." She clasped his fingers in hers, her touch searingly warm. "We'll catch up soon, dear." As Gladys melted back into the crowd, Horton rubbed his chilled palm and pondered whether to follow her. Something about those ethereal eyes made his instincts scream danger. Still, the allure of employment, any employment, was hard to ignore. He sighed and resumed his search, trying to shake the unsettling feeling that the city watched his every move, waiting to claim its newest pawn... </details> I chose the hard sci-fi example to test positivity bias. It did require some prompting, but it was willing to kill the protagonist. I chose the high fantasy example to see whether it would bleed human features through to elves, this didn't occur. I chose the weird fiction example to see if the LLM understood a niche genre. I'd say it performed okay, better on style than on substance. # Merge Strategy First, we create three bases: * Rain - This is a roleplay base which makes up the majority of the model. * Sun - This is the brains of the model, with strong instruct models & writing models. * Ghost - This model primarily aims to improve the NSFW/NSFL aspects of the model, as well as general vocabulary. 
After this, we have a two-slerp stage to create the final model.

## Models Used

The following models were used to create EtherealRainbow-v0.3-8B:

* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Nitral-AI/Hathor-L3-8B-v.02](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.02)
* [grimjim/Llama-3-Luminurse-v0.2-OAS-8B](https://huggingface.co/grimjim/Llama-3-Luminurse-v0.2-OAS-8B)
* [hf-100/Llama-3-Spellbound-Instruct-8B-0.3](https://huggingface.co/hf-100/Llama-3-Spellbound-Instruct-8B-0.3)
* [Gryphe/Pantheon-RP-1.0-8b-Llama-3](https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3)
* [Blackroot/Llama-3-LongStory](https://huggingface.co/Blackroot/Llama-3-LongStory)
* [Locutusque/Llama-3-Hercules-5.0-8B](https://huggingface.co/Locutusque/Llama-3-Hercules-5.0-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B)
* [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B)
* [mpasila/Llama-3-LimaRP-Instruct-8B](https://huggingface.co/mpasila/Llama-3-LimaRP-Instruct-8B)
* [Undi95/Llama-3-LewdPlay-8B-evo](https://huggingface.co/Undi95/Llama-3-LewdPlay-8B-evo)

## Mergekit Configs

### Rain

```yaml
models:
  - model: mlabonne/NeuralDaredevil-8B-abliterated
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.41
      weight: 0.4
  - model: Nitral-AI/Hathor-L3-8B-v.02
    parameters:
      density: 0.53
      weight: 0.5
  - model: grimjim/Llama-3-Luminurse-v0.2-OAS-8B
    parameters:
      density: 0.45
      weight: 0.1
merge_method: dare_ties
base_model: mlabonne/NeuralDaredevil-8B-abliterated
parameters:
  int8_mask: true
dtype: bfloat16
```

### Sun

```yaml
models:
  - model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
    parameters:
      density: 0.48
      weight: 0.5
  - model: Blackroot/Llama-3-LongStory
    parameters:
      density: 0.36
      weight: 0.2
  - model: Locutusque/Llama-3-Hercules-5.0-8B
    parameters:
      density: 0.51
      weight: 0.3
merge_method: dare_ties
base_model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
parameters:
  int8_mask: true
dtype: bfloat16
```

### Ghost

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: 0.39
      weight: 0.3
  - model: mpasila/Llama-3-LimaRP-Instruct-8B
    parameters:
      density: 0.54
      weight: 0.4
  - model: Undi95/Llama-3-LewdPlay-8B-evo
    parameters:
      density: 0.49
      weight: 0.3
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```

### Stage1 Slerp

```yaml
models:
  - model: ./fp16/Rain-v0.3-8B
  - model: ./fp16/Ghost-v0.3-8B
merge_method: slerp
base_model: ./fp16/Rain-v0.3-8B
parameters:
  t:
    - value: [0, 0, 0.1, 0.3, 0.5, 0.7, 0.5, 0.3, 0.1, 0, 0]
  embed_slerp: true
dtype: bfloat16
tokenizer-source: model:./fp16/Rain-v0.3-8B
```

### Final-Stage Slerp

```yaml
models:
  - model: ./fp16/ERStage1-v0.3-8B
  - model: ./fp16/Sun-v0.3-8B
merge_method: slerp
base_model: ./fp16/ERStage1-v0.3-8B
parameters:
  t:
    - value: [0, 0, 0.1, 0.2, 0.4, 0.6, 0.4, 0.2, 0.1, 0, 0]
  embed_slerp: true
dtype: bfloat16
tokenizer-source: model:./fp16/ERStage1-v0.3-8B
```
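For readers who want to reproduce one of the merge stages above, the sketch below shows how a saved copy of one of these YAML recipes could be run through mergekit's command-line tool. This is not part of the original card: the config file name, output paths, and flag choices are illustrative assumptions, and the exact mergekit version the author used is unknown.

```python
# Hypothetical reproduction sketch (not from the original model card).
# Assumes mergekit is installed (`pip install mergekit`) and that the "Rain"
# recipe above has been saved locally as rain-v0.3.yaml; paths are illustrative.
import subprocess

CONFIG_PATH = "rain-v0.3.yaml"       # one of the YAML recipes above, saved to disk
OUTPUT_DIR = "./fp16/Rain-v0.3-8B"   # path later referenced by the slerp stages

subprocess.run(
    [
        "mergekit-yaml",    # mergekit's CLI entry point
        CONFIG_PATH,
        OUTPUT_DIR,
        "--cuda",           # run the merge on GPU if one is available
        "--lazy-unpickle",  # reduce peak memory while reading shards
    ],
    check=True,             # raise if the merge fails
)
```

Running the three base recipes first, then the Stage1 and Final-Stage slerp recipes in order, mirrors the staged structure described above.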
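The card recommends the Llama-3 Instruct template and notes the samples were generated from a GGUF quant. As a rough illustration only, the snippet below shows how such a GGUF file might be loaded with llama-cpp-python and prompted with that template; the file name, system prompt, and sampling settings are assumptions rather than values taken from the card.

```python
# Hypothetical usage sketch (not from the original card): load a GGUF quant
# with llama-cpp-python and apply the Llama-3 Instruct template shown above.
# The GGUF file name and sampling settings are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="EtherealRainbow-v0.3-8B.Q6_K.gguf",  # assumed local file name
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers to GPU if possible
)

system_prompt = "You are a creative narrator for a long-form roleplay."
user_input = "Describe the abandoned observatory the party has just entered."

# Build the prompt in the recommended Llama-3 Instruct format.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.8,
    stop=["<|eot_id|>"],  # stop at the end-of-turn token
)
print(output["choices"][0]["text"])
```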
[ "CRAFT" ]
Non_BioNLP
<div align="center"> <b style="font-size: 36px;">EtherealRainbow-v0.3-8B</b> <img src="https://huggingface.co/invisietch/EtherealRainbow-v0.2-8B/resolve/main/ethrheader.png" style="width:60%"> </div> # Model Details Ethereal Rainbow is an 8B parameter merge of various Llama3-based finetunes created using mergekit. The purpose of Ethereal Rainbow is to create an uncensored Llama3 variant which is capable of writing creative prose, and engaging in SFW as well as NSFW roleplay and storytelling, with a strong focus on long-form responses &amp; adherence to prompts. v0.3 improves creativity over v0.2 without losing coherence. It has been tested over more than 1,000 messages including roleplay, code prompts, and 'write a scene'-type prompts. # Quantization Formats * [FP16 Safetensors](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B) * [GGUF 4/6/8/16bpw](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B-GGUF) * [6.5bpw EXL2](https://huggingface.co/Meggido/EtherealRainbow-v0.3-8B-6.5bpw-h8-exl2) (shoutout to [Meggido](https://huggingface.co/Meggido)) * [Alternative GGUFs](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF) (shoutout to [mradermacher](https://huggingface.co/mradermacher)) * [iMatrix i1 GGUFs](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-i1-GGUF) (shoutout to [mradermacher](https://huggingface.co/mradermacher)) # Disclaimer This model is built on an abliterated base and as such is largely uncensored. It can generate explicit, disturbing or offensive responses. Use responsibly. I am not responsible for your use of this model. # Prompting Format I'd recommend Llama-3 Instruct prompting format: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` Some of the models included in the merge were trained on ChatML &amp; Alpaca so you can try those. I have not tested them. # Example Storywriting These prompts are used on SillyTavern with a fairly basic narrator card. I have trimmed the start and finish where the narrator decided to add chapter headings, commentary and the like. All samples are made with the F32 GGUF loaded with koboldcpp, with response length capped at 2048 tokens. <details> <summary><b>Write me a 3,000 word opening chapter of a 'gritty hard sci-fi' novel, drawing inspiration from the writing styles of Isaac Asimov & Andy Weir. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 26 year old astronaut called Tone on a mission to Europa, who has just realised that the craft for the return journey is broken beyond repair, and he only has supplies for a few months. Given that survival is impossible, he seeks to spend the few months he has researching titan, so his life & mission are not wasted.</b></summary> Tone felt his heart sink as the confirmation came through his earpiece - the ship was lost. For a moment, he froze in stunned disbelief, then launched into frantic action. "Alright, everyone, we've got a serious situation here," he radioed, trying to sound calm despite his panic rising. "It looks like our ride home has bitten the dust." His fellow astronauts were equally shocked, their voices a flurry of panicked queries. "What happened?" "How could this happen?" "We're stranded?" "How long till rescue?" 
"We'll know more once I check the extent of the damage," he assured them, his voice steady despite the cold dread creeping up his spine. He strapped on a harness and floated out of the control room towards the ravaged engineering compartment. As he drifted closer, the scope of the destruction became horrifyingly clear - a jagged hole gaped in the hull, spewing a chilling rush of icy gas directly into space. It was obvious there'd be no fixing it, not without a fabrication plant and raw materials - resources they didn't carry on a probe mission to a distant moon. "Okay, let me give you the bad news - we're not going anywhere soon." He winced at how stark and final that sounded. They had enough provisions to last six months at a conservative rate... assuming no unforeseen problems cropped up, which they undoubtedly would. At the very least, it would mean delaying their research significantly - potentially even scrubbing it altogether. But given that survival was already impossible, it might as well serve some greater purpose. His mind racing, he radioed back to base with the grim news. Afterward, he retreated to the lab to review their findings. Tone's specialty lay in geology; studying Jupiter's icy satellite was his passion. He'd spent countless hours poring over the data they'd gathered, dreaming of the breakthroughs they'd make here. Now, with their mission ended in tragedy, he wondered what significance their work still held. As he flipped through images of towering cliffs carved from frozen methane, his thoughts veered toward the massive, unexplored Titan - Saturn's moon. The possibility of exploring another celestial body while trapped on Europa was tantalizing... dangerous, but enticing nonetheless. "Hey, guys, meet me in the conference room," he broadcast. Minutes later, he faced his team around the holographic table, his gaze lingering on Dr. Singh, the soft-spoken astrophysicist who'd grown close during their time together. "Listen, I know this is bleak, but let's not write off our entire expedition yet," he began. "Europa is just one moon; there's plenty left to discover in this system. What say we redirect our focus to Titan?" "I'm listening," said Singh quietly, her dark eyes searching his. She'd been the most skeptical of their initial proposal to study Europa - its ice crust concealing a hidden ocean made her think exploration too hazardous. But she'd come around once they'd started analyzing samples from beneath the surface. Perhaps this unexpected detour could change her views again? Kirk, their pilot, spoke first. "I'm down. Not like we have much choice, right? Though, uh, what makes you think we can actually get there? Our shuttle's toast." "Amen to that," agreed Rodriguez, their biologist, her tone dry. She'd always seemed slightly annoyed to be part of the geological expedition instead of a solo biology probe focused on Europa. "I'm not sure how yet," Tone admitted, "but there's got to be a way. Maybe the Europa probe's thruster packs are salvageable? We could rig something together..." He glanced around the table, gauging reactions. Singh remained expressionless, while Kirk leaned back in his chair with an air of cautious curiosity, and Rodriguez seemed open but unconvinced. Time to dig deeper. "Think about it - we've already collected loads of valuable data on Europa's composition. Imagine the discoveries waiting on Titan, a moon larger than Mercury!" Rodriguez snorted. "With what, exactly? 
We can barely fix this wreck, let alone build new craft to travel thousands of miles..." "Well, we don't need anything fancy. A makeshift probe, essentially," he countered, undeterred. "Maybe the damaged cargo modules will yield something useful? We'll find a way, guys - it beats drifting around Europa forever with nothing to show for it." Singh's eyes narrowed thoughtfully, Kirk rubbed his chin, and even Rodriguez seemed intrigued despite herself. "What's the worst that could happen?" Tone pressed. "We run out of power and freeze to death on a rock floating through space?" There was a brief pause before she muttered, "You know that's not the worst-case scenario." The others shared a grim chuckle at the biologist's macabre humor, easing the tense atmosphere a bit. Tone seized the moment to push further. "I'll start gathering schematics, see what we can scrounge. Who's with me on this wild goose chase?" Around the table, nods and murmurs of agreement arose. With a quiet sense of excitement mixed with dread, Tone set to work. His thoughts kept returning to Titan's enigmatic landscapes, alien seas and thick, murky atmosphere - and the tantalizing prospect of being the first human to explore it, no matter the risks involved. He hoped the others would share his drive to make the most of their doomed voyage, rather than succumbing to despair... because for better or worse, they were in this together now. </details> <details> <summary><b>Write me a 3,000 word opening chapter of a 'high fantasy' novel, drawing inspiration from the writing styles of J R R Tolkien & George R R Martin. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 19 year old female elf bard who is looking for adventure.</b></summary> Arius guided his horse closer to hers, his voice low and urgent. "Keep your wits about you, Lila. This doesn't feel right. Stay close." He placed a hand on her thigh, a reassuring pressure that sent a thrill through her. For a moment, their eyes locked, the connection sparking with electricity. They pressed onward, the tension growing with every passing minute. Then, without warning, an arrow streaked from the treetops, striking a soldier in the shoulder. Chaos erupted as the group scattered, drawing swords and bows. Lila's heart hammered in her chest as she swung down from Starlight's back, lute in hand. "Spirits above!" she gasped, scrambling behind a nearby bush. Arrows flew in rapid succession, finding their marks among the panicked crowd. The air reeked of sweat, fear, and ozone. Lila risked a peek above the foliage, her breath catching in her throat. Dozens of dark, humanoid figures emerged from the underbrush, their skin pale and deathly cold, their eyes glowing with an ethereal green fire. They wielded crude bows and clubs, their movements jerky and unnatural. Goblins, surely, though none she'd ever seen before. With a battle cry, Lila launched into action, sprinting between the trees to strike from the flanks. Her lute served as an impromptu shield, deflecting a club blow from one of the twisted creatures. She landed a quick kick to its groin, then struck its knees, toppling it to the ground. As she spun to face another assailant, a pang of hunger seized her stomach – an unnatural craving unlike anything she'd experienced before. These weren't just ordinary goblins... The battle raged on, the party slowly gaining the upper hand despite their numbers disadvantage. 
Lila fought with savage ferocity, her skills honed by countless tavern brawls and adventures across Eridoria. Yet even as she felled foe after foe, the insatiable hunger gnawed at her insides, a constant reminder of the eldritch forces at play. When the last goblin lay motionless at her feet, Lila sank to her knees, gasping for air. Blood trickled down her arm where a shallow gash had opened during the chaos. Arius appeared at her side, his expression a mixture of concern and relief. "You're alright?" he asked, helping her rise. "By the gods, that was brutal..." "I'm fine, Captain," Lila replied, wiping the sweat from her brow. But her voice lacked conviction. "Though I've never faced such abominations before." Arius eyed her warily. "Nor have I, Lady Silvermist. Whatever those were, they're no mere goblins." He scanned the clearing, taking stock of the bodies littering the forest floor. "We need to get you treated for that cut. And gather the wounded – we'll need to tend them before we continue on our original mission." Lila nodded, though her mind lingered on the unnatural hunger, the overwhelming urge to feast on raw flesh that had coursed through her veins during combat. She clutched her lute tighter, the familiar weight comforting in her hands. Whatever darkness lurked in these woods, she intended to unravel its secrets, no matter the cost. And so they rode on, the once-sleepy town of Eldrador left far behind as they delved deeper into the heart of the Whispering Woods, seeking answers to mysteries that would shake the very foundations of their world. </details> <details> <summary><b>Write me a 3,000 word opening chapter of a 'weird fiction' novel, drawing inspiration from the writing styles of China Mieville and Neil Gaiman. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a male in his 20s called Horton who has just come to the city looking for work.</b></summary> HORTON STOOD ON THE STREET CORNER, surveying the sprawling cityscape with a mix of excitement and trepidation. At twenty-six years old, he had finally mustered the courage to leave behind his hometown of Willowbrook, hoping to find something more meaningful in the big city. But so far, all he'd found was uncertainty, loneliness, and a growing unease about the shadows that seemed to follow him everywhere. As he gazed up at the towering skyscrapers, the air hummed with an ominous energy, like the vibrations of a bass line resonating through the pavement. Neon signs flickered and pulsed, casting garish light over the streets while a faint, acrid scent hung in the air – a bitter tang of smoke and ozone that made Horton wrinkle his nose. A gust of wind swept through the crowded sidewalks, sending trash tumbling and umbrellas flapping like crippled birds. Horton's dark brown coat whipped around him, exposing the worn fabric of his trousers beneath. Shaking his head, he tightened his grip on his battered suitcase and looked down at his scuffed boots, wondering how long he could survive here without work. The meager funds left after the bus ride wouldn't last much longer. "Hey, you lost?" A raspy voice broke through the din, and Horton glanced up to see a street performer sitting on the edge of the sidewalk, a beat-up guitar across his knees. The man had sunken cheeks, piercing green eyes, and a shock of wild silver hair. A faded fedora obscured most of his face. "No, not lost," Horton replied, attempting a reassuring smile despite the tightness in his chest. 
"Just trying to get my bearings." The musician strummed a discordant chord, his fingers dancing deftly along the strings. "City ain't kind to lost souls, friend. Best get clearheaded quick if you want to make it." Horton nodded, though his uncertainty remained palpable. As he scanned the crowds once more, a movement in the corner of his eye caught his attention. A flash of crimson – not a pedestrian, not a billboard... but something smaller, darting between buildings. His heart rate picked up as the crimson blur reappeared further down the street. It moved with preternatural speed, a whirling dervish of motion. Horton took off in pursuit, his suitcase thudding against his leg with every step. When he rounded a corner, the figure vanished, leaving only a lingering echo of rustling fabric. Panting, Horton pressed onward, weaving between pedestrians who barely acknowledged him. He spotted another crimson glimpse in a side alleyway, but as he approached, a chill crept up his spine. The space was too quiet, too empty – no sign of the elusive creature. Just cobwebs clinging to the brick walls and the distant clatter of the subway trains. He retreated into the main thoroughfare, pulse still racing, and collided with a woman. Apologetic murmurs turned to sharp intakes of breath as she stared up at him, her irises glowing an unearthly emerald. "Oh, hello there!" Her melodic voice seemed to burble like a babbling brook. "Lost again, handsome?" Horton blinked, his hand instinctively reaching out to steady her slender form. "N-no, I was just—" She smiled, her canines glinting like tiny jewels. "You were chasing something. Am I right?" He hesitated, unsure how much to reveal. "Maybe. It was small... red, like a..." "Ah, the city's newest residents," she finished, her expression unreadable. "I suppose we can help with that, for a price." "Price?" Horton repeated warily. "A job, perhaps? We're always looking for eager young souls." She tilted her head, studying him with unnerving intensity. "Or maybe just some companionship. My friends would love to meet you." Companionship. Horton shivered, the idea making his skin crawl despite the warmth of the summer evening. There was something off about this woman – something predatory lurking beneath the sweet facade. "I'm Horton," he managed, extending a hand awkwardly. "Gladys." She clasped his fingers in hers, her touch searingly warm. "We'll catch up soon, dear." As Gladys melted back into the crowd, Horton rubbed his chilled palm and pondered whether to follow her. Something about those ethereal eyes made his instincts scream danger. Still, the allure of employment, any employment, was hard to ignore. He sighed and resumed his search, trying to shake the unsettling feeling that the city watched his every move, waiting to claim its newest pawn... </details> I chose the hard sci-fi example to test positivity bias. It did require some prompting, but it was willing to kill the protagonist. I chose the high fantasy example to see whether it would bleed human features through to elves, this didn't occur. I chose the weird fiction example to see if the LLM understood a niche genre. I'd say it performed okay, better on style than on substance. # Merge Strategy First, we create three bases: * Rain - This is a roleplay base which makes up the majority of the model. * Sun - This is the brains of the model, with strong instruct models & writing models. * Ghost - This model primarily aims to improve the NSFW/NSFL aspects of the model, as well as general vocabulary. 
After this, we have a two-slerp stage to create the final model.

## Models Used

The following models were used to create EtherealRainbow-v0.3-8B:

* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Nitral-AI/Hathor-L3-8B-v.02](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.02)
* [grimjim/Llama-3-Luminurse-v0.2-OAS-8B](https://huggingface.co/grimjim/Llama-3-Luminurse-v0.2-OAS-8B)
* [hf-100/Llama-3-Spellbound-Instruct-8B-0.3](https://huggingface.co/hf-100/Llama-3-Spellbound-Instruct-8B-0.3)
* [Gryphe/Pantheon-RP-1.0-8b-Llama-3](https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3)
* [Blackroot/Llama-3-LongStory](https://huggingface.co/Blackroot/Llama-3-LongStory)
* [Locutusque/Llama-3-Hercules-5.0-8B](https://huggingface.co/Locutusque/Llama-3-Hercules-5.0-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B)
* [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B)
* [mpasila/Llama-3-LimaRP-Instruct-8B](https://huggingface.co/mpasila/Llama-3-LimaRP-Instruct-8B)
* [Undi95/Llama-3-LewdPlay-8B-evo](https://huggingface.co/Undi95/Llama-3-LewdPlay-8B-evo)

## Mergekit Configs

### Rain

```yaml
models:
  - model: mlabonne/NeuralDaredevil-8B-abliterated
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.41
      weight: 0.4
  - model: Nitral-AI/Hathor-L3-8B-v.02
    parameters:
      density: 0.53
      weight: 0.5
  - model: grimjim/Llama-3-Luminurse-v0.2-OAS-8B
    parameters:
      density: 0.45
      weight: 0.1
merge_method: dare_ties
base_model: mlabonne/NeuralDaredevil-8B-abliterated
parameters:
  int8_mask: true
dtype: bfloat16
```

### Sun

```yaml
models:
  - model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
    parameters:
      density: 0.48
      weight: 0.5
  - model: Blackroot/Llama-3-LongStory
    parameters:
      density: 0.36
      weight: 0.2
  - model: Locutusque/Llama-3-Hercules-5.0-8B
    parameters:
      density: 0.51
      weight: 0.3
merge_method: dare_ties
base_model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
parameters:
  int8_mask: true
dtype: bfloat16
```

### Ghost

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: 0.39
      weight: 0.3
  - model: mpasila/Llama-3-LimaRP-Instruct-8B
    parameters:
      density: 0.54
      weight: 0.4
  - model: Undi95/Llama-3-LewdPlay-8B-evo
    parameters:
      density: 0.49
      weight: 0.3
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```

### Stage1 Slerp

```yaml
models:
  - model: ./fp16/Rain-v0.3-8B
  - model: ./fp16/Ghost-v0.3-8B
merge_method: slerp
base_model: ./fp16/Rain-v0.3-8B
parameters:
  t:
    - value: [0, 0, 0.1, 0.3, 0.5, 0.7, 0.5, 0.3, 0.1, 0, 0]
  embed_slerp: true
dtype: bfloat16
tokenizer-source: model:./fp16/Rain-v0.3-8B
```

### Final-Stage Slerp

```yaml
models:
  - model: ./fp16/ERStage1-v0.3-8B
  - model: ./fp16/Sun-v0.3-8B
merge_method: slerp
base_model: ./fp16/ERStage1-v0.3-8B
parameters:
  t:
    - value: [0, 0, 0.1, 0.2, 0.4, 0.6, 0.4, 0.2, 0.1, 0, 0]
  embed_slerp: true
dtype: bfloat16
tokenizer-source: model:./fp16/ERStage1-v0.3-8B
```
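For reference, a minimal inference sketch using the Llama-3 Instruct format described in the Prompting Format section above. This is not part of the original card: the repo id comes from the FP16 Safetensors link, the system prompt and sampling values are arbitrary placeholders rather than recommended settings, and it assumes a recent `transformers` release and that the tokenizer ships a Llama-3 chat template. If no chat template is present, format the prompt manually with the special tokens shown earlier.

```python
# Minimal sketch (assumptions noted above): load the merged model and prompt it
# with the Llama-3 Instruct format via the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "invisietch/EtherealRainbow-v0.3-8B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a creative narrator. Write long-form prose."},
    {"role": "user", "content": "Write the opening scene of a gritty hard sci-fi story."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Placeholder sampling settings, not the author's recommendations.
output = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```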
{"language": ["en"], "library_name": "transformers", "license": "llama3", "tags": ["mergekit", "merge", "not-for-all-audiences"]}
dataset
null
522
mmazeem/gte-Qwen2-7B-instruct-Q4_K_M-GGUF
mmazeem
sentence-similarity
[ "sentence-transformers", "gguf", "mteb", "transformers", "Qwen2", "sentence-similarity", "llama-cpp", "gguf-my-repo", "base_model:Alibaba-NLP/gte-Qwen2-7B-instruct", "base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
2024-10-04T12:12:21Z
2024-10-04T12:12:45+00:00
7
0
--- base_model: Alibaba-NLP/gte-Qwen2-7B-instruct license: apache-2.0 tags: - mteb - sentence-transformers - transformers - Qwen2 - sentence-similarity - llama-cpp - gguf-my-repo model-index: - name: gte-qwen2-7B-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 91.31343283582089 - type: ap value: 67.64251402604096 - type: f1 value: 87.53372530755692 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.497825 - type: ap value: 96.30329547047529 - type: f1 value: 97.49769793778039 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 62.564 - type: f1 value: 60.975777935041066 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 36.486000000000004 - type: map_at_10 value: 54.842 - type: map_at_100 value: 55.206999999999994 - type: map_at_1000 value: 55.206999999999994 - type: map_at_3 value: 49.893 - type: map_at_5 value: 53.105000000000004 - type: mrr_at_1 value: 37.34 - type: mrr_at_10 value: 55.143 - type: mrr_at_100 value: 55.509 - type: mrr_at_1000 value: 55.509 - type: mrr_at_3 value: 50.212999999999994 - type: mrr_at_5 value: 53.432 - type: ndcg_at_1 value: 36.486000000000004 - type: ndcg_at_10 value: 64.273 - type: ndcg_at_100 value: 65.66199999999999 - type: ndcg_at_1000 value: 65.66199999999999 - type: ndcg_at_3 value: 54.352999999999994 - type: ndcg_at_5 value: 60.131 - type: precision_at_1 value: 36.486000000000004 - type: precision_at_10 value: 9.395000000000001 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.428 - type: precision_at_5 value: 16.259 - type: recall_at_1 value: 36.486000000000004 - type: recall_at_10 value: 93.95400000000001 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 67.283 - type: recall_at_5 value: 81.294 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 56.461169803700564 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 51.73600434466286 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.57827065898053 - type: mrr value: 79.08136569493911 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 83.53324575999243 - type: cos_sim_spearman value: 81.37173362822374 - type: euclidean_pearson value: 82.19243335103444 - type: euclidean_spearman value: 81.33679307304334 - type: manhattan_pearson 
value: 82.38752665975699 - type: manhattan_spearman value: 81.31510583189689 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.56818181818181 - type: f1 value: 87.25826722019875 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 50.09239610327673 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 46.64733054606282 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 33.997 - type: map_at_10 value: 48.176 - type: map_at_100 value: 49.82 - type: map_at_1000 value: 49.924 - type: map_at_3 value: 43.626 - type: map_at_5 value: 46.275 - type: mrr_at_1 value: 42.059999999999995 - type: mrr_at_10 value: 53.726 - type: mrr_at_100 value: 54.398 - type: mrr_at_1000 value: 54.416 - type: mrr_at_3 value: 50.714999999999996 - type: mrr_at_5 value: 52.639 - type: ndcg_at_1 value: 42.059999999999995 - type: ndcg_at_10 value: 55.574999999999996 - type: ndcg_at_100 value: 60.744 - type: ndcg_at_1000 value: 61.85699999999999 - type: ndcg_at_3 value: 49.363 - type: ndcg_at_5 value: 52.44 - type: precision_at_1 value: 42.059999999999995 - type: precision_at_10 value: 11.101999999999999 - type: precision_at_100 value: 1.73 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 24.464 - type: precision_at_5 value: 18.026 - type: recall_at_1 value: 33.997 - type: recall_at_10 value: 70.35900000000001 - type: recall_at_100 value: 91.642 - type: recall_at_1000 value: 97.977 - type: recall_at_3 value: 52.76 - type: recall_at_5 value: 61.148 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: BeIR/cqadupstack config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 35.884 - type: map_at_10 value: 48.14 - type: map_at_100 value: 49.5 - type: map_at_1000 value: 49.63 - type: map_at_3 value: 44.646 - type: map_at_5 value: 46.617999999999995 - type: mrr_at_1 value: 44.458999999999996 - type: mrr_at_10 value: 53.751000000000005 - type: mrr_at_100 value: 54.37800000000001 - type: mrr_at_1000 value: 54.415 - type: mrr_at_3 value: 51.815 - type: mrr_at_5 value: 52.882 - type: ndcg_at_1 value: 44.458999999999996 - type: ndcg_at_10 value: 54.157 - type: ndcg_at_100 value: 58.362 - type: ndcg_at_1000 value: 60.178 - type: ndcg_at_3 value: 49.661 - type: ndcg_at_5 value: 51.74999999999999 - type: precision_at_1 value: 44.458999999999996 - type: precision_at_10 value: 10.248 - type: precision_at_100 value: 1.5890000000000002 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 23.928 - type: precision_at_5 value: 16.878999999999998 - type: recall_at_1 value: 35.884 - type: recall_at_10 value: 64.798 - type: recall_at_100 value: 82.345 - type: recall_at_1000 value: 93.267 - type: recall_at_3 value: 51.847 - type: recall_at_5 value: 57.601 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: BeIR/cqadupstack config: default split: test revision: 
4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 39.383 - type: map_at_10 value: 53.714 - type: map_at_100 value: 54.838 - type: map_at_1000 value: 54.87800000000001 - type: map_at_3 value: 50.114999999999995 - type: map_at_5 value: 52.153000000000006 - type: mrr_at_1 value: 45.016 - type: mrr_at_10 value: 56.732000000000006 - type: mrr_at_100 value: 57.411 - type: mrr_at_1000 value: 57.431 - type: mrr_at_3 value: 54.044000000000004 - type: mrr_at_5 value: 55.639 - type: ndcg_at_1 value: 45.016 - type: ndcg_at_10 value: 60.228 - type: ndcg_at_100 value: 64.277 - type: ndcg_at_1000 value: 65.07 - type: ndcg_at_3 value: 54.124 - type: ndcg_at_5 value: 57.147000000000006 - type: precision_at_1 value: 45.016 - type: precision_at_10 value: 9.937 - type: precision_at_100 value: 1.288 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 24.471999999999998 - type: precision_at_5 value: 16.991 - type: recall_at_1 value: 39.383 - type: recall_at_10 value: 76.175 - type: recall_at_100 value: 93.02 - type: recall_at_1000 value: 98.60900000000001 - type: recall_at_3 value: 60.265 - type: recall_at_5 value: 67.46600000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: BeIR/cqadupstack config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 27.426000000000002 - type: map_at_10 value: 37.397000000000006 - type: map_at_100 value: 38.61 - type: map_at_1000 value: 38.678000000000004 - type: map_at_3 value: 34.150999999999996 - type: map_at_5 value: 36.137 - type: mrr_at_1 value: 29.944 - type: mrr_at_10 value: 39.654 - type: mrr_at_100 value: 40.638000000000005 - type: mrr_at_1000 value: 40.691 - type: mrr_at_3 value: 36.817 - type: mrr_at_5 value: 38.524 - type: ndcg_at_1 value: 29.944 - type: ndcg_at_10 value: 43.094 - type: ndcg_at_100 value: 48.789 - type: ndcg_at_1000 value: 50.339999999999996 - type: ndcg_at_3 value: 36.984 - type: ndcg_at_5 value: 40.248 - type: precision_at_1 value: 29.944 - type: precision_at_10 value: 6.78 - type: precision_at_100 value: 1.024 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 15.895000000000001 - type: precision_at_5 value: 11.39 - type: recall_at_1 value: 27.426000000000002 - type: recall_at_10 value: 58.464000000000006 - type: recall_at_100 value: 84.193 - type: recall_at_1000 value: 95.52000000000001 - type: recall_at_3 value: 42.172 - type: recall_at_5 value: 50.101 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: BeIR/cqadupstack config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 19.721 - type: map_at_10 value: 31.604 - type: map_at_100 value: 32.972 - type: map_at_1000 value: 33.077 - type: map_at_3 value: 27.218999999999998 - type: map_at_5 value: 29.53 - type: mrr_at_1 value: 25.0 - type: mrr_at_10 value: 35.843 - type: mrr_at_100 value: 36.785000000000004 - type: mrr_at_1000 value: 36.842000000000006 - type: mrr_at_3 value: 32.193 - type: mrr_at_5 value: 34.264 - type: ndcg_at_1 value: 25.0 - type: ndcg_at_10 value: 38.606 - type: ndcg_at_100 value: 44.272 - type: ndcg_at_1000 value: 46.527 - type: ndcg_at_3 value: 30.985000000000003 - type: ndcg_at_5 value: 34.43 - type: precision_at_1 value: 25.0 - type: precision_at_10 value: 7.811 - type: precision_at_100 value: 1.203 - type: precision_at_1000 value: 0.15 - type: precision_at_3 value: 15.423 - type: precision_at_5 value: 11.791 - type: 
recall_at_1 value: 19.721 - type: recall_at_10 value: 55.625 - type: recall_at_100 value: 79.34400000000001 - type: recall_at_1000 value: 95.208 - type: recall_at_3 value: 35.19 - type: recall_at_5 value: 43.626 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: BeIR/cqadupstack config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 33.784 - type: map_at_10 value: 47.522 - type: map_at_100 value: 48.949999999999996 - type: map_at_1000 value: 49.038 - type: map_at_3 value: 43.284 - type: map_at_5 value: 45.629 - type: mrr_at_1 value: 41.482 - type: mrr_at_10 value: 52.830999999999996 - type: mrr_at_100 value: 53.559999999999995 - type: mrr_at_1000 value: 53.588 - type: mrr_at_3 value: 50.016000000000005 - type: mrr_at_5 value: 51.614000000000004 - type: ndcg_at_1 value: 41.482 - type: ndcg_at_10 value: 54.569 - type: ndcg_at_100 value: 59.675999999999995 - type: ndcg_at_1000 value: 60.989000000000004 - type: ndcg_at_3 value: 48.187000000000005 - type: ndcg_at_5 value: 51.183 - type: precision_at_1 value: 41.482 - type: precision_at_10 value: 10.221 - type: precision_at_100 value: 1.486 - type: precision_at_1000 value: 0.17500000000000002 - type: precision_at_3 value: 23.548 - type: precision_at_5 value: 16.805 - type: recall_at_1 value: 33.784 - type: recall_at_10 value: 69.798 - type: recall_at_100 value: 90.098 - type: recall_at_1000 value: 98.176 - type: recall_at_3 value: 52.127 - type: recall_at_5 value: 59.861 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: BeIR/cqadupstack config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 28.038999999999998 - type: map_at_10 value: 41.904 - type: map_at_100 value: 43.36 - type: map_at_1000 value: 43.453 - type: map_at_3 value: 37.785999999999994 - type: map_at_5 value: 40.105000000000004 - type: mrr_at_1 value: 35.046 - type: mrr_at_10 value: 46.926 - type: mrr_at_100 value: 47.815000000000005 - type: mrr_at_1000 value: 47.849000000000004 - type: mrr_at_3 value: 44.273 - type: mrr_at_5 value: 45.774 - type: ndcg_at_1 value: 35.046 - type: ndcg_at_10 value: 48.937000000000005 - type: ndcg_at_100 value: 54.544000000000004 - type: ndcg_at_1000 value: 56.069 - type: ndcg_at_3 value: 42.858000000000004 - type: ndcg_at_5 value: 45.644 - type: precision_at_1 value: 35.046 - type: precision_at_10 value: 9.452 - type: precision_at_100 value: 1.429 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 21.346999999999998 - type: precision_at_5 value: 15.342 - type: recall_at_1 value: 28.038999999999998 - type: recall_at_10 value: 64.59700000000001 - type: recall_at_100 value: 87.735 - type: recall_at_1000 value: 97.41300000000001 - type: recall_at_3 value: 47.368 - type: recall_at_5 value: 54.93900000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 28.17291666666667 - type: map_at_10 value: 40.025749999999995 - type: map_at_100 value: 41.39208333333333 - type: map_at_1000 value: 41.499249999999996 - type: map_at_3 value: 36.347 - type: map_at_5 value: 38.41391666666667 - type: mrr_at_1 value: 33.65925 - type: mrr_at_10 value: 44.085499999999996 - type: mrr_at_100 value: 44.94116666666667 - type: mrr_at_1000 value: 44.9855 - type: mrr_at_3 value: 41.2815 - type: mrr_at_5 value: 42.91491666666666 - type: ndcg_at_1 
value: 33.65925 - type: ndcg_at_10 value: 46.430833333333325 - type: ndcg_at_100 value: 51.761 - type: ndcg_at_1000 value: 53.50899999999999 - type: ndcg_at_3 value: 40.45133333333333 - type: ndcg_at_5 value: 43.31483333333334 - type: precision_at_1 value: 33.65925 - type: precision_at_10 value: 8.4995 - type: precision_at_100 value: 1.3210000000000004 - type: precision_at_1000 value: 0.16591666666666666 - type: precision_at_3 value: 19.165083333333335 - type: precision_at_5 value: 13.81816666666667 - type: recall_at_1 value: 28.17291666666667 - type: recall_at_10 value: 61.12624999999999 - type: recall_at_100 value: 83.97266666666667 - type: recall_at_1000 value: 95.66550000000001 - type: recall_at_3 value: 44.661249999999995 - type: recall_at_5 value: 51.983333333333334 - type: map_at_1 value: 17.936 - type: map_at_10 value: 27.399 - type: map_at_100 value: 28.632 - type: map_at_1000 value: 28.738000000000003 - type: map_at_3 value: 24.456 - type: map_at_5 value: 26.06 - type: mrr_at_1 value: 19.224 - type: mrr_at_10 value: 28.998 - type: mrr_at_100 value: 30.11 - type: mrr_at_1000 value: 30.177 - type: mrr_at_3 value: 26.247999999999998 - type: mrr_at_5 value: 27.708 - type: ndcg_at_1 value: 19.224 - type: ndcg_at_10 value: 32.911 - type: ndcg_at_100 value: 38.873999999999995 - type: ndcg_at_1000 value: 41.277 - type: ndcg_at_3 value: 27.142 - type: ndcg_at_5 value: 29.755 - type: precision_at_1 value: 19.224 - type: precision_at_10 value: 5.6930000000000005 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 12.138 - type: precision_at_5 value: 8.909 - type: recall_at_1 value: 17.936 - type: recall_at_10 value: 48.096 - type: recall_at_100 value: 75.389 - type: recall_at_1000 value: 92.803 - type: recall_at_3 value: 32.812999999999995 - type: recall_at_5 value: 38.851 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: BeIR/cqadupstack config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 24.681 - type: map_at_10 value: 34.892 - type: map_at_100 value: 35.996 - type: map_at_1000 value: 36.083 - type: map_at_3 value: 31.491999999999997 - type: map_at_5 value: 33.632 - type: mrr_at_1 value: 28.528 - type: mrr_at_10 value: 37.694 - type: mrr_at_100 value: 38.613 - type: mrr_at_1000 value: 38.668 - type: mrr_at_3 value: 34.714 - type: mrr_at_5 value: 36.616 - type: ndcg_at_1 value: 28.528 - type: ndcg_at_10 value: 40.703 - type: ndcg_at_100 value: 45.993 - type: ndcg_at_1000 value: 47.847 - type: ndcg_at_3 value: 34.622 - type: ndcg_at_5 value: 38.035999999999994 - type: precision_at_1 value: 28.528 - type: precision_at_10 value: 6.902 - type: precision_at_100 value: 1.0370000000000001 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 15.798000000000002 - type: precision_at_5 value: 11.655999999999999 - type: recall_at_1 value: 24.681 - type: recall_at_10 value: 55.81 - type: recall_at_100 value: 79.785 - type: recall_at_1000 value: 92.959 - type: recall_at_3 value: 39.074 - type: recall_at_5 value: 47.568 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: BeIR/cqadupstack config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 18.627 - type: map_at_10 value: 27.872000000000003 - type: map_at_100 value: 29.237999999999996 - type: map_at_1000 value: 29.363 - type: map_at_3 value: 24.751 - type: map_at_5 value: 26.521 - type: mrr_at_1 value: 23.021 
- type: mrr_at_10 value: 31.924000000000003 - type: mrr_at_100 value: 32.922000000000004 - type: mrr_at_1000 value: 32.988 - type: mrr_at_3 value: 29.192 - type: mrr_at_5 value: 30.798 - type: ndcg_at_1 value: 23.021 - type: ndcg_at_10 value: 33.535 - type: ndcg_at_100 value: 39.732 - type: ndcg_at_1000 value: 42.201 - type: ndcg_at_3 value: 28.153 - type: ndcg_at_5 value: 30.746000000000002 - type: precision_at_1 value: 23.021 - type: precision_at_10 value: 6.459 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 13.719000000000001 - type: precision_at_5 value: 10.193000000000001 - type: recall_at_1 value: 18.627 - type: recall_at_10 value: 46.463 - type: recall_at_100 value: 74.226 - type: recall_at_1000 value: 91.28500000000001 - type: recall_at_3 value: 31.357000000000003 - type: recall_at_5 value: 38.067 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: BeIR/cqadupstack config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 31.457 - type: map_at_10 value: 42.888 - type: map_at_100 value: 44.24 - type: map_at_1000 value: 44.327 - type: map_at_3 value: 39.588 - type: map_at_5 value: 41.423 - type: mrr_at_1 value: 37.126999999999995 - type: mrr_at_10 value: 47.083000000000006 - type: mrr_at_100 value: 47.997 - type: mrr_at_1000 value: 48.044 - type: mrr_at_3 value: 44.574000000000005 - type: mrr_at_5 value: 46.202 - type: ndcg_at_1 value: 37.126999999999995 - type: ndcg_at_10 value: 48.833 - type: ndcg_at_100 value: 54.327000000000005 - type: ndcg_at_1000 value: 56.011 - type: ndcg_at_3 value: 43.541999999999994 - type: ndcg_at_5 value: 46.127 - type: precision_at_1 value: 37.126999999999995 - type: precision_at_10 value: 8.376999999999999 - type: precision_at_100 value: 1.2309999999999999 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 20.211000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 31.457 - type: recall_at_10 value: 62.369 - type: recall_at_100 value: 85.444 - type: recall_at_1000 value: 96.65599999999999 - type: recall_at_3 value: 47.961 - type: recall_at_5 value: 54.676 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: BeIR/cqadupstack config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 27.139999999999997 - type: map_at_10 value: 38.801 - type: map_at_100 value: 40.549 - type: map_at_1000 value: 40.802 - type: map_at_3 value: 35.05 - type: map_at_5 value: 36.884 - type: mrr_at_1 value: 33.004 - type: mrr_at_10 value: 43.864 - type: mrr_at_100 value: 44.667 - type: mrr_at_1000 value: 44.717 - type: mrr_at_3 value: 40.777 - type: mrr_at_5 value: 42.319 - type: ndcg_at_1 value: 33.004 - type: ndcg_at_10 value: 46.022 - type: ndcg_at_100 value: 51.542 - type: ndcg_at_1000 value: 53.742000000000004 - type: ndcg_at_3 value: 39.795 - type: ndcg_at_5 value: 42.272 - type: precision_at_1 value: 33.004 - type: precision_at_10 value: 9.012 - type: precision_at_100 value: 1.7770000000000001 - type: precision_at_1000 value: 0.26 - type: precision_at_3 value: 19.038 - type: precision_at_5 value: 13.675999999999998 - type: recall_at_1 value: 27.139999999999997 - type: recall_at_10 value: 60.961 - type: recall_at_100 value: 84.451 - type: recall_at_1000 value: 98.113 - type: recall_at_3 value: 43.001 - type: recall_at_5 value: 49.896 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: 
mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 22.076999999999998 - type: map_at_10 value: 35.44 - type: map_at_100 value: 37.651 - type: map_at_1000 value: 37.824999999999996 - type: map_at_3 value: 30.764999999999997 - type: map_at_5 value: 33.26 - type: mrr_at_1 value: 50.163000000000004 - type: mrr_at_10 value: 61.207 - type: mrr_at_100 value: 61.675000000000004 - type: mrr_at_1000 value: 61.692 - type: mrr_at_3 value: 58.60999999999999 - type: mrr_at_5 value: 60.307 - type: ndcg_at_1 value: 50.163000000000004 - type: ndcg_at_10 value: 45.882 - type: ndcg_at_100 value: 53.239999999999995 - type: ndcg_at_1000 value: 55.852000000000004 - type: ndcg_at_3 value: 40.514 - type: ndcg_at_5 value: 42.038 - type: precision_at_1 value: 50.163000000000004 - type: precision_at_10 value: 13.466000000000001 - type: precision_at_100 value: 2.164 - type: precision_at_1000 value: 0.266 - type: precision_at_3 value: 29.707 - type: precision_at_5 value: 21.694 - type: recall_at_1 value: 22.076999999999998 - type: recall_at_10 value: 50.193 - type: recall_at_100 value: 74.993 - type: recall_at_1000 value: 89.131 - type: recall_at_3 value: 35.472 - type: recall_at_5 value: 41.814 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.953 - type: map_at_10 value: 24.515 - type: map_at_100 value: 36.173 - type: map_at_1000 value: 38.351 - type: map_at_3 value: 16.592000000000002 - type: map_at_5 value: 20.036 - type: mrr_at_1 value: 74.25 - type: mrr_at_10 value: 81.813 - type: mrr_at_100 value: 82.006 - type: mrr_at_1000 value: 82.011 - type: mrr_at_3 value: 80.875 - type: mrr_at_5 value: 81.362 - type: ndcg_at_1 value: 62.5 - type: ndcg_at_10 value: 52.42 - type: ndcg_at_100 value: 56.808 - type: ndcg_at_1000 value: 63.532999999999994 - type: ndcg_at_3 value: 56.654 - type: ndcg_at_5 value: 54.18300000000001 - type: precision_at_1 value: 74.25 - type: precision_at_10 value: 42.699999999999996 - type: precision_at_100 value: 13.675 - type: precision_at_1000 value: 2.664 - type: precision_at_3 value: 60.5 - type: precision_at_5 value: 52.800000000000004 - type: recall_at_1 value: 9.953 - type: recall_at_10 value: 30.253999999999998 - type: recall_at_100 value: 62.516000000000005 - type: recall_at_1000 value: 84.163 - type: recall_at_3 value: 18.13 - type: recall_at_5 value: 22.771 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 79.455 - type: f1 value: 74.16798697647569 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 87.531 - type: map_at_10 value: 93.16799999999999 - type: map_at_100 value: 93.341 - type: map_at_1000 value: 93.349 - type: map_at_3 value: 92.444 - type: map_at_5 value: 92.865 - type: mrr_at_1 value: 94.014 - type: mrr_at_10 value: 96.761 - type: mrr_at_100 value: 96.762 - type: mrr_at_1000 value: 96.762 - type: mrr_at_3 value: 96.672 - type: mrr_at_5 value: 96.736 - type: ndcg_at_1 value: 94.014 - type: ndcg_at_10 value: 95.112 - type: ndcg_at_100 value: 95.578 - type: ndcg_at_1000 value: 95.68900000000001 - type: ndcg_at_3 value: 94.392 - type: ndcg_at_5 value: 94.72500000000001 - type: precision_at_1 
value: 94.014 - type: precision_at_10 value: 11.065 - type: precision_at_100 value: 1.157 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 35.259 - type: precision_at_5 value: 21.599 - type: recall_at_1 value: 87.531 - type: recall_at_10 value: 97.356 - type: recall_at_100 value: 98.965 - type: recall_at_1000 value: 99.607 - type: recall_at_3 value: 95.312 - type: recall_at_5 value: 96.295 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.055 - type: map_at_10 value: 53.114 - type: map_at_100 value: 55.235 - type: map_at_1000 value: 55.345 - type: map_at_3 value: 45.854 - type: map_at_5 value: 50.025 - type: mrr_at_1 value: 60.34 - type: mrr_at_10 value: 68.804 - type: mrr_at_100 value: 69.309 - type: mrr_at_1000 value: 69.32199999999999 - type: mrr_at_3 value: 66.40899999999999 - type: mrr_at_5 value: 67.976 - type: ndcg_at_1 value: 60.34 - type: ndcg_at_10 value: 62.031000000000006 - type: ndcg_at_100 value: 68.00500000000001 - type: ndcg_at_1000 value: 69.286 - type: ndcg_at_3 value: 56.355999999999995 - type: ndcg_at_5 value: 58.687 - type: precision_at_1 value: 60.34 - type: precision_at_10 value: 17.176 - type: precision_at_100 value: 2.36 - type: precision_at_1000 value: 0.259 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 27.809 - type: recall_at_1 value: 32.055 - type: recall_at_10 value: 70.91 - type: recall_at_100 value: 91.83 - type: recall_at_1000 value: 98.871 - type: recall_at_3 value: 51.202999999999996 - type: recall_at_5 value: 60.563 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 43.68 - type: map_at_10 value: 64.389 - type: map_at_100 value: 65.24 - type: map_at_1000 value: 65.303 - type: map_at_3 value: 61.309000000000005 - type: map_at_5 value: 63.275999999999996 - type: mrr_at_1 value: 87.36 - type: mrr_at_10 value: 91.12 - type: mrr_at_100 value: 91.227 - type: mrr_at_1000 value: 91.229 - type: mrr_at_3 value: 90.57600000000001 - type: mrr_at_5 value: 90.912 - type: ndcg_at_1 value: 87.36 - type: ndcg_at_10 value: 73.076 - type: ndcg_at_100 value: 75.895 - type: ndcg_at_1000 value: 77.049 - type: ndcg_at_3 value: 68.929 - type: ndcg_at_5 value: 71.28 - type: precision_at_1 value: 87.36 - type: precision_at_10 value: 14.741000000000001 - type: precision_at_100 value: 1.694 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 43.043 - type: precision_at_5 value: 27.681 - type: recall_at_1 value: 43.68 - type: recall_at_10 value: 73.707 - type: recall_at_100 value: 84.7 - type: recall_at_1000 value: 92.309 - type: recall_at_3 value: 64.564 - type: recall_at_5 value: 69.203 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 96.75399999999999 - type: ap value: 95.29389839242187 - type: f1 value: 96.75348377433475 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 25.176 - type: map_at_10 value: 38.598 - type: map_at_100 value: 39.707 - type: map_at_1000 value: 39.744 - type: map_at_3 value: 34.566 - type: map_at_5 value: 36.863 - type: mrr_at_1 value: 
25.874000000000002 - type: mrr_at_10 value: 39.214 - type: mrr_at_100 value: 40.251 - type: mrr_at_1000 value: 40.281 - type: mrr_at_3 value: 35.291 - type: mrr_at_5 value: 37.545 - type: ndcg_at_1 value: 25.874000000000002 - type: ndcg_at_10 value: 45.98 - type: ndcg_at_100 value: 51.197 - type: ndcg_at_1000 value: 52.073 - type: ndcg_at_3 value: 37.785999999999994 - type: ndcg_at_5 value: 41.870000000000005 - type: precision_at_1 value: 25.874000000000002 - type: precision_at_10 value: 7.181 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 16.051000000000002 - type: precision_at_5 value: 11.713 - type: recall_at_1 value: 25.176 - type: recall_at_10 value: 68.67699999999999 - type: recall_at_100 value: 92.55 - type: recall_at_1000 value: 99.164 - type: recall_at_3 value: 46.372 - type: recall_at_5 value: 56.16 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 99.03784769721841 - type: f1 value: 98.97791641821495 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 91.88326493388054 - type: f1 value: 73.74809928034335 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 85.41358439811701 - type: f1 value: 83.503679460639 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 89.77135171486215 - type: f1 value: 88.89843747468366 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 46.22695362087359 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 44.132372165849425 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.35680810650402 - type: mrr value: 34.72625715637218 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 7.165000000000001 - type: map_at_10 value: 15.424 - type: map_at_100 value: 20.28 - type: map_at_1000 value: 22.065 - type: map_at_3 value: 11.236 - type: map_at_5 value: 13.025999999999998 - type: mrr_at_1 value: 51.702999999999996 - type: mrr_at_10 value: 59.965 - type: mrr_at_100 value: 60.667 - type: mrr_at_1000 value: 60.702999999999996 - type: mrr_at_3 value: 58.772000000000006 - type: mrr_at_5 value: 59.267 - type: ndcg_at_1 value: 49.536 - type: ndcg_at_10 value: 40.6 - type: ndcg_at_100 value: 37.848 - type: ndcg_at_1000 value: 46.657 - type: ndcg_at_3 value: 46.117999999999995 - type: ndcg_at_5 value: 43.619 - type: precision_at_1 value: 51.393 - type: precision_at_10 value: 
30.31 - type: precision_at_100 value: 9.972 - type: precision_at_1000 value: 2.329 - type: precision_at_3 value: 43.137 - type: precision_at_5 value: 37.585 - type: recall_at_1 value: 7.165000000000001 - type: recall_at_10 value: 19.689999999999998 - type: recall_at_100 value: 39.237 - type: recall_at_1000 value: 71.417 - type: recall_at_3 value: 12.247 - type: recall_at_5 value: 14.902999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 42.653999999999996 - type: map_at_10 value: 59.611999999999995 - type: map_at_100 value: 60.32300000000001 - type: map_at_1000 value: 60.336 - type: map_at_3 value: 55.584999999999994 - type: map_at_5 value: 58.19 - type: mrr_at_1 value: 47.683 - type: mrr_at_10 value: 62.06700000000001 - type: mrr_at_100 value: 62.537 - type: mrr_at_1000 value: 62.544999999999995 - type: mrr_at_3 value: 59.178 - type: mrr_at_5 value: 61.034 - type: ndcg_at_1 value: 47.654 - type: ndcg_at_10 value: 67.001 - type: ndcg_at_100 value: 69.73899999999999 - type: ndcg_at_1000 value: 69.986 - type: ndcg_at_3 value: 59.95700000000001 - type: ndcg_at_5 value: 64.025 - type: precision_at_1 value: 47.654 - type: precision_at_10 value: 10.367999999999999 - type: precision_at_100 value: 1.192 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 26.651000000000003 - type: precision_at_5 value: 18.459 - type: recall_at_1 value: 42.653999999999996 - type: recall_at_10 value: 86.619 - type: recall_at_100 value: 98.04899999999999 - type: recall_at_1000 value: 99.812 - type: recall_at_3 value: 68.987 - type: recall_at_5 value: 78.158 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: None metrics: - type: map_at_1 value: 72.538 - type: map_at_10 value: 86.702 - type: map_at_100 value: 87.31 - type: map_at_1000 value: 87.323 - type: map_at_3 value: 83.87 - type: map_at_5 value: 85.682 - type: mrr_at_1 value: 83.31 - type: mrr_at_10 value: 89.225 - type: mrr_at_100 value: 89.30399999999999 - type: mrr_at_1000 value: 89.30399999999999 - type: mrr_at_3 value: 88.44300000000001 - type: mrr_at_5 value: 89.005 - type: ndcg_at_1 value: 83.32000000000001 - type: ndcg_at_10 value: 90.095 - type: ndcg_at_100 value: 91.12 - type: ndcg_at_1000 value: 91.179 - type: ndcg_at_3 value: 87.606 - type: ndcg_at_5 value: 89.031 - type: precision_at_1 value: 83.32000000000001 - type: precision_at_10 value: 13.641 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.377 - type: precision_at_5 value: 25.162000000000003 - type: recall_at_1 value: 72.538 - type: recall_at_10 value: 96.47200000000001 - type: recall_at_100 value: 99.785 - type: recall_at_1000 value: 99.99900000000001 - type: recall_at_3 value: 89.278 - type: recall_at_5 value: 93.367 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 73.55219145406065 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 74.13437105242755 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 6.873 - type: 
MTEB evaluation metadata (`model-index` block): benchmark scores for English MTEB, Chinese C-MTEB, French (MTEB-FR), and Polish (MTEB-PL) tasks, covering STS, retrieval, reranking, classification, clustering, pair classification, and summarization metrics.
value: 61.96 - type: precision_at_1000 value: 22.648 - type: precision_at_3 value: 89.333 - type: precision_at_5 value: 87.2 - type: recall_at_1 value: 0.245 - type: recall_at_10 value: 2.193 - type: recall_at_100 value: 14.938 - type: recall_at_1000 value: 48.563 - type: recall_at_3 value: 0.738 - type: recall_at_5 value: 1.173
---

# mmazeem/gte-Qwen2-7B-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo mmazeem/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo mmazeem/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo mmazeem/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo mmazeem/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -c 2048
```
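Once `llama-server` is running (for example, via the Server command above), it exposes an HTTP API you can call from any client. The snippet below is a minimal sketch, assuming the server's default listen address of `http://127.0.0.1:8080` and its `/completion` endpoint; the address, port, and generation parameters are assumptions to adjust for your own setup.

```python
# Minimal sketch: query a locally running llama-server over HTTP.
# Assumes the default listen address (127.0.0.1:8080) and the /completion
# endpoint; adjust these if you started the server with different flags.
import json
import urllib.request

payload = {
    "prompt": "The meaning to life and the universe is",
    "n_predict": 64,      # cap on the number of generated tokens
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode("utf-8"))

# The generated continuation is returned in the "content" field.
print(result.get("content"))
```

The prompt here simply mirrors the CLI example above; substitute any text you want the model to continue.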
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
{"base_model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "license": "apache-2.0", "tags": ["mteb", "sentence-transformers", "transformers", "Qwen2", "sentence-similarity", "llama-cpp", "gguf-my-repo"], "model-index": [{"name": "gte-qwen2-7B-instruct", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 91.31343283582089}, {"type": "ap", "value": 67.64251402604096}, {"type": "f1", "value": 87.53372530755692}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 97.497825}, {"type": "ap", "value": 96.30329547047529}, {"type": "f1", "value": 97.49769793778039}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 62.564}, {"type": "f1", "value": 60.975777935041066}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "mteb/arguana", "config": "default", "split": "test", "revision": "c22ab2a51041ffd869aaddef7af8d8215647e41a"}, "metrics": [{"type": "map_at_1", "value": 36.486000000000004}, {"type": "map_at_10", "value": 54.842}, {"type": "map_at_100", "value": 55.206999999999994}, {"type": "map_at_1000", "value": 55.206999999999994}, {"type": "map_at_3", "value": 49.893}, {"type": "map_at_5", "value": 53.105000000000004}, {"type": "mrr_at_1", "value": 37.34}, {"type": "mrr_at_10", "value": 55.143}, {"type": "mrr_at_100", "value": 55.509}, {"type": "mrr_at_1000", "value": 55.509}, {"type": "mrr_at_3", "value": 50.212999999999994}, {"type": "mrr_at_5", "value": 53.432}, {"type": "ndcg_at_1", "value": 36.486000000000004}, {"type": "ndcg_at_10", "value": 64.273}, {"type": "ndcg_at_100", "value": 65.66199999999999}, {"type": "ndcg_at_1000", "value": 65.66199999999999}, {"type": "ndcg_at_3", "value": 54.352999999999994}, {"type": "ndcg_at_5", "value": 60.131}, {"type": "precision_at_1", "value": 36.486000000000004}, {"type": "precision_at_10", "value": 9.395000000000001}, {"type": "precision_at_100", "value": 0.996}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 22.428}, {"type": "precision_at_5", "value": 16.259}, {"type": "recall_at_1", "value": 36.486000000000004}, {"type": "recall_at_10", "value": 93.95400000000001}, {"type": "recall_at_100", "value": 99.644}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 67.283}, {"type": "recall_at_5", "value": 81.294}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 56.461169803700564}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 51.73600434466286}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB 
AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 67.57827065898053}, {"type": "mrr", "value": 79.08136569493911}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.53324575999243}, {"type": "cos_sim_spearman", "value": 81.37173362822374}, {"type": "euclidean_pearson", "value": 82.19243335103444}, {"type": "euclidean_spearman", "value": 81.33679307304334}, {"type": "manhattan_pearson", "value": 82.38752665975699}, {"type": "manhattan_spearman", "value": 81.31510583189689}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 87.56818181818181}, {"type": "f1", "value": 87.25826722019875}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 50.09239610327673}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 46.64733054606282}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "f46a197baaae43b4f621051089b82a364682dfeb"}, "metrics": [{"type": "map_at_1", "value": 33.997}, {"type": "map_at_10", "value": 48.176}, {"type": "map_at_100", "value": 49.82}, {"type": "map_at_1000", "value": 49.924}, {"type": "map_at_3", "value": 43.626}, {"type": "map_at_5", "value": 46.275}, {"type": "mrr_at_1", "value": 42.059999999999995}, {"type": "mrr_at_10", "value": 53.726}, {"type": "mrr_at_100", "value": 54.398}, {"type": "mrr_at_1000", "value": 54.416}, {"type": "mrr_at_3", "value": 50.714999999999996}, {"type": "mrr_at_5", "value": 52.639}, {"type": "ndcg_at_1", "value": 42.059999999999995}, {"type": "ndcg_at_10", "value": 55.574999999999996}, {"type": "ndcg_at_100", "value": 60.744}, {"type": "ndcg_at_1000", "value": 61.85699999999999}, {"type": "ndcg_at_3", "value": 49.363}, {"type": "ndcg_at_5", "value": 52.44}, {"type": "precision_at_1", "value": 42.059999999999995}, {"type": "precision_at_10", "value": 11.101999999999999}, {"type": "precision_at_100", "value": 1.73}, {"type": "precision_at_1000", "value": 0.218}, {"type": "precision_at_3", "value": 24.464}, {"type": "precision_at_5", "value": 18.026}, {"type": "recall_at_1", "value": 33.997}, {"type": "recall_at_10", "value": 70.35900000000001}, {"type": "recall_at_100", "value": 91.642}, {"type": "recall_at_1000", "value": 97.977}, {"type": "recall_at_3", "value": 52.76}, {"type": "recall_at_5", "value": 61.148}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackEnglishRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "ad9991cb51e31e31e430383c75ffb2885547b5f0"}, "metrics": [{"type": "map_at_1", "value": 35.884}, {"type": 
"map_at_10", "value": 48.14}, {"type": "map_at_100", "value": 49.5}, {"type": "map_at_1000", "value": 49.63}, {"type": "map_at_3", "value": 44.646}, {"type": "map_at_5", "value": 46.617999999999995}, {"type": "mrr_at_1", "value": 44.458999999999996}, {"type": "mrr_at_10", "value": 53.751000000000005}, {"type": "mrr_at_100", "value": 54.37800000000001}, {"type": "mrr_at_1000", "value": 54.415}, {"type": "mrr_at_3", "value": 51.815}, {"type": "mrr_at_5", "value": 52.882}, {"type": "ndcg_at_1", "value": 44.458999999999996}, {"type": "ndcg_at_10", "value": 54.157}, {"type": "ndcg_at_100", "value": 58.362}, {"type": "ndcg_at_1000", "value": 60.178}, {"type": "ndcg_at_3", "value": 49.661}, {"type": "ndcg_at_5", "value": 51.74999999999999}, {"type": "precision_at_1", "value": 44.458999999999996}, {"type": "precision_at_10", "value": 10.248}, {"type": "precision_at_100", "value": 1.5890000000000002}, {"type": "precision_at_1000", "value": 0.207}, {"type": "precision_at_3", "value": 23.928}, {"type": "precision_at_5", "value": 16.878999999999998}, {"type": "recall_at_1", "value": 35.884}, {"type": "recall_at_10", "value": 64.798}, {"type": "recall_at_100", "value": 82.345}, {"type": "recall_at_1000", "value": 93.267}, {"type": "recall_at_3", "value": 51.847}, {"type": "recall_at_5", "value": 57.601}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGamingRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "4885aa143210c98657558c04aaf3dc47cfb54340"}, "metrics": [{"type": "map_at_1", "value": 39.383}, {"type": "map_at_10", "value": 53.714}, {"type": "map_at_100", "value": 54.838}, {"type": "map_at_1000", "value": 54.87800000000001}, {"type": "map_at_3", "value": 50.114999999999995}, {"type": "map_at_5", "value": 52.153000000000006}, {"type": "mrr_at_1", "value": 45.016}, {"type": "mrr_at_10", "value": 56.732000000000006}, {"type": "mrr_at_100", "value": 57.411}, {"type": "mrr_at_1000", "value": 57.431}, {"type": "mrr_at_3", "value": 54.044000000000004}, {"type": "mrr_at_5", "value": 55.639}, {"type": "ndcg_at_1", "value": 45.016}, {"type": "ndcg_at_10", "value": 60.228}, {"type": "ndcg_at_100", "value": 64.277}, {"type": "ndcg_at_1000", "value": 65.07}, {"type": "ndcg_at_3", "value": 54.124}, {"type": "ndcg_at_5", "value": 57.147000000000006}, {"type": "precision_at_1", "value": 45.016}, {"type": "precision_at_10", "value": 9.937}, {"type": "precision_at_100", "value": 1.288}, {"type": "precision_at_1000", "value": 0.13899999999999998}, {"type": "precision_at_3", "value": 24.471999999999998}, {"type": "precision_at_5", "value": 16.991}, {"type": "recall_at_1", "value": 39.383}, {"type": "recall_at_10", "value": 76.175}, {"type": "recall_at_100", "value": 93.02}, {"type": "recall_at_1000", "value": 98.60900000000001}, {"type": "recall_at_3", "value": 60.265}, {"type": "recall_at_5", "value": 67.46600000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGisRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "5003b3064772da1887988e05400cf3806fe491f2"}, "metrics": [{"type": "map_at_1", "value": 27.426000000000002}, {"type": "map_at_10", "value": 37.397000000000006}, {"type": "map_at_100", "value": 38.61}, {"type": "map_at_1000", "value": 38.678000000000004}, {"type": "map_at_3", "value": 34.150999999999996}, {"type": "map_at_5", "value": 36.137}, {"type": "mrr_at_1", "value": 29.944}, {"type": "mrr_at_10", "value": 39.654}, {"type": "mrr_at_100", "value": 40.638000000000005}, 
{"type": "mrr_at_1000", "value": 40.691}, {"type": "mrr_at_3", "value": 36.817}, {"type": "mrr_at_5", "value": 38.524}, {"type": "ndcg_at_1", "value": 29.944}, {"type": "ndcg_at_10", "value": 43.094}, {"type": "ndcg_at_100", "value": 48.789}, {"type": "ndcg_at_1000", "value": 50.339999999999996}, {"type": "ndcg_at_3", "value": 36.984}, {"type": "ndcg_at_5", "value": 40.248}, {"type": "precision_at_1", "value": 29.944}, {"type": "precision_at_10", "value": 6.78}, {"type": "precision_at_100", "value": 1.024}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 15.895000000000001}, {"type": "precision_at_5", "value": 11.39}, {"type": "recall_at_1", "value": 27.426000000000002}, {"type": "recall_at_10", "value": 58.464000000000006}, {"type": "recall_at_100", "value": 84.193}, {"type": "recall_at_1000", "value": 95.52000000000001}, {"type": "recall_at_3", "value": 42.172}, {"type": "recall_at_5", "value": 50.101}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackMathematicaRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "90fceea13679c63fe563ded68f3b6f06e50061de"}, "metrics": [{"type": "map_at_1", "value": 19.721}, {"type": "map_at_10", "value": 31.604}, {"type": "map_at_100", "value": 32.972}, {"type": "map_at_1000", "value": 33.077}, {"type": "map_at_3", "value": 27.218999999999998}, {"type": "map_at_5", "value": 29.53}, {"type": "mrr_at_1", "value": 25.0}, {"type": "mrr_at_10", "value": 35.843}, {"type": "mrr_at_100", "value": 36.785000000000004}, {"type": "mrr_at_1000", "value": 36.842000000000006}, {"type": "mrr_at_3", "value": 32.193}, {"type": "mrr_at_5", "value": 34.264}, {"type": "ndcg_at_1", "value": 25.0}, {"type": "ndcg_at_10", "value": 38.606}, {"type": "ndcg_at_100", "value": 44.272}, {"type": "ndcg_at_1000", "value": 46.527}, {"type": "ndcg_at_3", "value": 30.985000000000003}, {"type": "ndcg_at_5", "value": 34.43}, {"type": "precision_at_1", "value": 25.0}, {"type": "precision_at_10", "value": 7.811}, {"type": "precision_at_100", "value": 1.203}, {"type": "precision_at_1000", "value": 0.15}, {"type": "precision_at_3", "value": 15.423}, {"type": "precision_at_5", "value": 11.791}, {"type": "recall_at_1", "value": 19.721}, {"type": "recall_at_10", "value": 55.625}, {"type": "recall_at_100", "value": 79.34400000000001}, {"type": "recall_at_1000", "value": 95.208}, {"type": "recall_at_3", "value": 35.19}, {"type": "recall_at_5", "value": 43.626}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackPhysicsRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4"}, "metrics": [{"type": "map_at_1", "value": 33.784}, {"type": "map_at_10", "value": 47.522}, {"type": "map_at_100", "value": 48.949999999999996}, {"type": "map_at_1000", "value": 49.038}, {"type": "map_at_3", "value": 43.284}, {"type": "map_at_5", "value": 45.629}, {"type": "mrr_at_1", "value": 41.482}, {"type": "mrr_at_10", "value": 52.830999999999996}, {"type": "mrr_at_100", "value": 53.559999999999995}, {"type": "mrr_at_1000", "value": 53.588}, {"type": "mrr_at_3", "value": 50.016000000000005}, {"type": "mrr_at_5", "value": 51.614000000000004}, {"type": "ndcg_at_1", "value": 41.482}, {"type": "ndcg_at_10", "value": 54.569}, {"type": "ndcg_at_100", "value": 59.675999999999995}, {"type": "ndcg_at_1000", "value": 60.989000000000004}, {"type": "ndcg_at_3", "value": 48.187000000000005}, {"type": "ndcg_at_5", "value": 51.183}, 
{"type": "precision_at_1", "value": 41.482}, {"type": "precision_at_10", "value": 10.221}, {"type": "precision_at_100", "value": 1.486}, {"type": "precision_at_1000", "value": 0.17500000000000002}, {"type": "precision_at_3", "value": 23.548}, {"type": "precision_at_5", "value": 16.805}, {"type": "recall_at_1", "value": 33.784}, {"type": "recall_at_10", "value": 69.798}, {"type": "recall_at_100", "value": 90.098}, {"type": "recall_at_1000", "value": 98.176}, {"type": "recall_at_3", "value": 52.127}, {"type": "recall_at_5", "value": 59.861}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackProgrammersRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "6184bc1440d2dbc7612be22b50686b8826d22b32"}, "metrics": [{"type": "map_at_1", "value": 28.038999999999998}, {"type": "map_at_10", "value": 41.904}, {"type": "map_at_100", "value": 43.36}, {"type": "map_at_1000", "value": 43.453}, {"type": "map_at_3", "value": 37.785999999999994}, {"type": "map_at_5", "value": 40.105000000000004}, {"type": "mrr_at_1", "value": 35.046}, {"type": "mrr_at_10", "value": 46.926}, {"type": "mrr_at_100", "value": 47.815000000000005}, {"type": "mrr_at_1000", "value": 47.849000000000004}, {"type": "mrr_at_3", "value": 44.273}, {"type": "mrr_at_5", "value": 45.774}, {"type": "ndcg_at_1", "value": 35.046}, {"type": "ndcg_at_10", "value": 48.937000000000005}, {"type": "ndcg_at_100", "value": 54.544000000000004}, {"type": "ndcg_at_1000", "value": 56.069}, {"type": "ndcg_at_3", "value": 42.858000000000004}, {"type": "ndcg_at_5", "value": 45.644}, {"type": "precision_at_1", "value": 35.046}, {"type": "precision_at_10", "value": 9.452}, {"type": "precision_at_100", "value": 1.429}, {"type": "precision_at_1000", "value": 0.173}, {"type": "precision_at_3", "value": 21.346999999999998}, {"type": "precision_at_5", "value": 15.342}, {"type": "recall_at_1", "value": 28.038999999999998}, {"type": "recall_at_10", "value": 64.59700000000001}, {"type": "recall_at_100", "value": 87.735}, {"type": "recall_at_1000", "value": 97.41300000000001}, {"type": "recall_at_3", "value": 47.368}, {"type": "recall_at_5", "value": 54.93900000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "4ffe81d471b1924886b33c7567bfb200e9eec5c4"}, "metrics": [{"type": "map_at_1", "value": 28.17291666666667}, {"type": "map_at_10", "value": 40.025749999999995}, {"type": "map_at_100", "value": 41.39208333333333}, {"type": "map_at_1000", "value": 41.499249999999996}, {"type": "map_at_3", "value": 36.347}, {"type": "map_at_5", "value": 38.41391666666667}, {"type": "mrr_at_1", "value": 33.65925}, {"type": "mrr_at_10", "value": 44.085499999999996}, {"type": "mrr_at_100", "value": 44.94116666666667}, {"type": "mrr_at_1000", "value": 44.9855}, {"type": "mrr_at_3", "value": 41.2815}, {"type": "mrr_at_5", "value": 42.91491666666666}, {"type": "ndcg_at_1", "value": 33.65925}, {"type": "ndcg_at_10", "value": 46.430833333333325}, {"type": "ndcg_at_100", "value": 51.761}, {"type": "ndcg_at_1000", "value": 53.50899999999999}, {"type": "ndcg_at_3", "value": 40.45133333333333}, {"type": "ndcg_at_5", "value": 43.31483333333334}, {"type": "precision_at_1", "value": 33.65925}, {"type": "precision_at_10", "value": 8.4995}, {"type": "precision_at_100", "value": 1.3210000000000004}, {"type": "precision_at_1000", "value": 0.16591666666666666}, {"type": "precision_at_3", "value": 19.165083333333335}, {"type": 
"precision_at_5", "value": 13.81816666666667}, {"type": "recall_at_1", "value": 28.17291666666667}, {"type": "recall_at_10", "value": 61.12624999999999}, {"type": "recall_at_100", "value": 83.97266666666667}, {"type": "recall_at_1000", "value": 95.66550000000001}, {"type": "recall_at_3", "value": 44.661249999999995}, {"type": "recall_at_5", "value": 51.983333333333334}, {"type": "map_at_1", "value": 17.936}, {"type": "map_at_10", "value": 27.399}, {"type": "map_at_100", "value": 28.632}, {"type": "map_at_1000", "value": 28.738000000000003}, {"type": "map_at_3", "value": 24.456}, {"type": "map_at_5", "value": 26.06}, {"type": "mrr_at_1", "value": 19.224}, {"type": "mrr_at_10", "value": 28.998}, {"type": "mrr_at_100", "value": 30.11}, {"type": "mrr_at_1000", "value": 30.177}, {"type": "mrr_at_3", "value": 26.247999999999998}, {"type": "mrr_at_5", "value": 27.708}, {"type": "ndcg_at_1", "value": 19.224}, {"type": "ndcg_at_10", "value": 32.911}, {"type": "ndcg_at_100", "value": 38.873999999999995}, {"type": "ndcg_at_1000", "value": 41.277}, {"type": "ndcg_at_3", "value": 27.142}, {"type": "ndcg_at_5", "value": 29.755}, {"type": "precision_at_1", "value": 19.224}, {"type": "precision_at_10", "value": 5.6930000000000005}, {"type": "precision_at_100", "value": 0.9259999999999999}, {"type": "precision_at_1000", "value": 0.126}, {"type": "precision_at_3", "value": 12.138}, {"type": "precision_at_5", "value": 8.909}, {"type": "recall_at_1", "value": 17.936}, {"type": "recall_at_10", "value": 48.096}, {"type": "recall_at_100", "value": 75.389}, {"type": "recall_at_1000", "value": 92.803}, {"type": "recall_at_3", "value": 32.812999999999995}, {"type": "recall_at_5", "value": 38.851}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackStatsRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "65ac3a16b8e91f9cee4c9828cc7c335575432a2a"}, "metrics": [{"type": "map_at_1", "value": 24.681}, {"type": "map_at_10", "value": 34.892}, {"type": "map_at_100", "value": 35.996}, {"type": "map_at_1000", "value": 36.083}, {"type": "map_at_3", "value": 31.491999999999997}, {"type": "map_at_5", "value": 33.632}, {"type": "mrr_at_1", "value": 28.528}, {"type": "mrr_at_10", "value": 37.694}, {"type": "mrr_at_100", "value": 38.613}, {"type": "mrr_at_1000", "value": 38.668}, {"type": "mrr_at_3", "value": 34.714}, {"type": "mrr_at_5", "value": 36.616}, {"type": "ndcg_at_1", "value": 28.528}, {"type": "ndcg_at_10", "value": 40.703}, {"type": "ndcg_at_100", "value": 45.993}, {"type": "ndcg_at_1000", "value": 47.847}, {"type": "ndcg_at_3", "value": 34.622}, {"type": "ndcg_at_5", "value": 38.035999999999994}, {"type": "precision_at_1", "value": 28.528}, {"type": "precision_at_10", "value": 6.902}, {"type": "precision_at_100", "value": 1.0370000000000001}, {"type": "precision_at_1000", "value": 0.126}, {"type": "precision_at_3", "value": 15.798000000000002}, {"type": "precision_at_5", "value": 11.655999999999999}, {"type": "recall_at_1", "value": 24.681}, {"type": "recall_at_10", "value": 55.81}, {"type": "recall_at_100", "value": 79.785}, {"type": "recall_at_1000", "value": 92.959}, {"type": "recall_at_3", "value": 39.074}, {"type": "recall_at_5", "value": 47.568}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackTexRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "46989137a86843e03a6195de44b09deda022eec7"}, "metrics": [{"type": "map_at_1", "value": 18.627}, {"type": "map_at_10", "value": 27.872000000000003}, 
{"type": "map_at_100", "value": 29.237999999999996}, {"type": "map_at_1000", "value": 29.363}, {"type": "map_at_3", "value": 24.751}, {"type": "map_at_5", "value": 26.521}, {"type": "mrr_at_1", "value": 23.021}, {"type": "mrr_at_10", "value": 31.924000000000003}, {"type": "mrr_at_100", "value": 32.922000000000004}, {"type": "mrr_at_1000", "value": 32.988}, {"type": "mrr_at_3", "value": 29.192}, {"type": "mrr_at_5", "value": 30.798}, {"type": "ndcg_at_1", "value": 23.021}, {"type": "ndcg_at_10", "value": 33.535}, {"type": "ndcg_at_100", "value": 39.732}, {"type": "ndcg_at_1000", "value": 42.201}, {"type": "ndcg_at_3", "value": 28.153}, {"type": "ndcg_at_5", "value": 30.746000000000002}, {"type": "precision_at_1", "value": 23.021}, {"type": "precision_at_10", "value": 6.459}, {"type": "precision_at_100", "value": 1.1320000000000001}, {"type": "precision_at_1000", "value": 0.153}, {"type": "precision_at_3", "value": 13.719000000000001}, {"type": "precision_at_5", "value": 10.193000000000001}, {"type": "recall_at_1", "value": 18.627}, {"type": "recall_at_10", "value": 46.463}, {"type": "recall_at_100", "value": 74.226}, {"type": "recall_at_1000", "value": 91.28500000000001}, {"type": "recall_at_3", "value": 31.357000000000003}, {"type": "recall_at_5", "value": 38.067}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackUnixRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "6c6430d3a6d36f8d2a829195bc5dc94d7e063e53"}, "metrics": [{"type": "map_at_1", "value": 31.457}, {"type": "map_at_10", "value": 42.888}, {"type": "map_at_100", "value": 44.24}, {"type": "map_at_1000", "value": 44.327}, {"type": "map_at_3", "value": 39.588}, {"type": "map_at_5", "value": 41.423}, {"type": "mrr_at_1", "value": 37.126999999999995}, {"type": "mrr_at_10", "value": 47.083000000000006}, {"type": "mrr_at_100", "value": 47.997}, {"type": "mrr_at_1000", "value": 48.044}, {"type": "mrr_at_3", "value": 44.574000000000005}, {"type": "mrr_at_5", "value": 46.202}, {"type": "ndcg_at_1", "value": 37.126999999999995}, {"type": "ndcg_at_10", "value": 48.833}, {"type": "ndcg_at_100", "value": 54.327000000000005}, {"type": "ndcg_at_1000", "value": 56.011}, {"type": "ndcg_at_3", "value": 43.541999999999994}, {"type": "ndcg_at_5", "value": 46.127}, {"type": "precision_at_1", "value": 37.126999999999995}, {"type": "precision_at_10", "value": 8.376999999999999}, {"type": "precision_at_100", "value": 1.2309999999999999}, {"type": "precision_at_1000", "value": 0.146}, {"type": "precision_at_3", "value": 20.211000000000002}, {"type": "precision_at_5", "value": 14.16}, {"type": "recall_at_1", "value": 31.457}, {"type": "recall_at_10", "value": 62.369}, {"type": "recall_at_100", "value": 85.444}, {"type": "recall_at_1000", "value": 96.65599999999999}, {"type": "recall_at_3", "value": 47.961}, {"type": "recall_at_5", "value": 54.676}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWebmastersRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "160c094312a0e1facb97e55eeddb698c0abe3571"}, "metrics": [{"type": "map_at_1", "value": 27.139999999999997}, {"type": "map_at_10", "value": 38.801}, {"type": "map_at_100", "value": 40.549}, {"type": "map_at_1000", "value": 40.802}, {"type": "map_at_3", "value": 35.05}, {"type": "map_at_5", "value": 36.884}, {"type": "mrr_at_1", "value": 33.004}, {"type": "mrr_at_10", "value": 43.864}, {"type": "mrr_at_100", "value": 44.667}, {"type": "mrr_at_1000", "value": 44.717}, {"type": 
"mrr_at_3", "value": 40.777}, {"type": "mrr_at_5", "value": 42.319}, {"type": "ndcg_at_1", "value": 33.004}, {"type": "ndcg_at_10", "value": 46.022}, {"type": "ndcg_at_100", "value": 51.542}, {"type": "ndcg_at_1000", "value": 53.742000000000004}, {"type": "ndcg_at_3", "value": 39.795}, {"type": "ndcg_at_5", "value": 42.272}, {"type": "precision_at_1", "value": 33.004}, {"type": "precision_at_10", "value": 9.012}, {"type": "precision_at_100", "value": 1.7770000000000001}, {"type": "precision_at_1000", "value": 0.26}, {"type": "precision_at_3", "value": 19.038}, {"type": "precision_at_5", "value": 13.675999999999998}, {"type": "recall_at_1", "value": 27.139999999999997}, {"type": "recall_at_10", "value": 60.961}, {"type": "recall_at_100", "value": 84.451}, {"type": "recall_at_1000", "value": 98.113}, {"type": "recall_at_3", "value": 43.001}, {"type": "recall_at_5", "value": 49.896}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "mteb/climate-fever", "config": "default", "split": "test", "revision": "47f2ac6acb640fc46020b02a5b59fdda04d39380"}, "metrics": [{"type": "map_at_1", "value": 22.076999999999998}, {"type": "map_at_10", "value": 35.44}, {"type": "map_at_100", "value": 37.651}, {"type": "map_at_1000", "value": 37.824999999999996}, {"type": "map_at_3", "value": 30.764999999999997}, {"type": "map_at_5", "value": 33.26}, {"type": "mrr_at_1", "value": 50.163000000000004}, {"type": "mrr_at_10", "value": 61.207}, {"type": "mrr_at_100", "value": 61.675000000000004}, {"type": "mrr_at_1000", "value": 61.692}, {"type": "mrr_at_3", "value": 58.60999999999999}, {"type": "mrr_at_5", "value": 60.307}, {"type": "ndcg_at_1", "value": 50.163000000000004}, {"type": "ndcg_at_10", "value": 45.882}, {"type": "ndcg_at_100", "value": 53.239999999999995}, {"type": "ndcg_at_1000", "value": 55.852000000000004}, {"type": "ndcg_at_3", "value": 40.514}, {"type": "ndcg_at_5", "value": 42.038}, {"type": "precision_at_1", "value": 50.163000000000004}, {"type": "precision_at_10", "value": 13.466000000000001}, {"type": "precision_at_100", "value": 2.164}, {"type": "precision_at_1000", "value": 0.266}, {"type": "precision_at_3", "value": 29.707}, {"type": "precision_at_5", "value": 21.694}, {"type": "recall_at_1", "value": 22.076999999999998}, {"type": "recall_at_10", "value": 50.193}, {"type": "recall_at_100", "value": 74.993}, {"type": "recall_at_1000", "value": 89.131}, {"type": "recall_at_3", "value": 35.472}, {"type": "recall_at_5", "value": 41.814}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "mteb/dbpedia", "config": "default", "split": "test", "revision": "c0f706b76e590d620bd6618b3ca8efdd34e2d659"}, "metrics": [{"type": "map_at_1", "value": 9.953}, {"type": "map_at_10", "value": 24.515}, {"type": "map_at_100", "value": 36.173}, {"type": "map_at_1000", "value": 38.351}, {"type": "map_at_3", "value": 16.592000000000002}, {"type": "map_at_5", "value": 20.036}, {"type": "mrr_at_1", "value": 74.25}, {"type": "mrr_at_10", "value": 81.813}, {"type": "mrr_at_100", "value": 82.006}, {"type": "mrr_at_1000", "value": 82.011}, {"type": "mrr_at_3", "value": 80.875}, {"type": "mrr_at_5", "value": 81.362}, {"type": "ndcg_at_1", "value": 62.5}, {"type": "ndcg_at_10", "value": 52.42}, {"type": "ndcg_at_100", "value": 56.808}, {"type": "ndcg_at_1000", "value": 63.532999999999994}, {"type": "ndcg_at_3", "value": 56.654}, {"type": "ndcg_at_5", "value": 54.18300000000001}, {"type": "precision_at_1", "value": 74.25}, {"type": "precision_at_10", "value": 
42.699999999999996}, {"type": "precision_at_100", "value": 13.675}, {"type": "precision_at_1000", "value": 2.664}, {"type": "precision_at_3", "value": 60.5}, {"type": "precision_at_5", "value": 52.800000000000004}, {"type": "recall_at_1", "value": 9.953}, {"type": "recall_at_10", "value": 30.253999999999998}, {"type": "recall_at_100", "value": 62.516000000000005}, {"type": "recall_at_1000", "value": 84.163}, {"type": "recall_at_3", "value": 18.13}, {"type": "recall_at_5", "value": 22.771}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 79.455}, {"type": "f1", "value": 74.16798697647569}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "mteb/fever", "config": "default", "split": "test", "revision": "bea83ef9e8fb933d90a2f1d5515737465d613e12"}, "metrics": [{"type": "map_at_1", "value": 87.531}, {"type": "map_at_10", "value": 93.16799999999999}, {"type": "map_at_100", "value": 93.341}, {"type": "map_at_1000", "value": 93.349}, {"type": "map_at_3", "value": 92.444}, {"type": "map_at_5", "value": 92.865}, {"type": "mrr_at_1", "value": 94.014}, {"type": "mrr_at_10", "value": 96.761}, {"type": "mrr_at_100", "value": 96.762}, {"type": "mrr_at_1000", "value": 96.762}, {"type": "mrr_at_3", "value": 96.672}, {"type": "mrr_at_5", "value": 96.736}, {"type": "ndcg_at_1", "value": 94.014}, {"type": "ndcg_at_10", "value": 95.112}, {"type": "ndcg_at_100", "value": 95.578}, {"type": "ndcg_at_1000", "value": 95.68900000000001}, {"type": "ndcg_at_3", "value": 94.392}, {"type": "ndcg_at_5", "value": 94.72500000000001}, {"type": "precision_at_1", "value": 94.014}, {"type": "precision_at_10", "value": 11.065}, {"type": "precision_at_100", "value": 1.157}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 35.259}, {"type": "precision_at_5", "value": 21.599}, {"type": "recall_at_1", "value": 87.531}, {"type": "recall_at_10", "value": 97.356}, {"type": "recall_at_100", "value": 98.965}, {"type": "recall_at_1000", "value": 99.607}, {"type": "recall_at_3", "value": 95.312}, {"type": "recall_at_5", "value": 96.295}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "mteb/fiqa", "config": "default", "split": "test", "revision": "27a168819829fe9bcd655c2df245fb19452e8e06"}, "metrics": [{"type": "map_at_1", "value": 32.055}, {"type": "map_at_10", "value": 53.114}, {"type": "map_at_100", "value": 55.235}, {"type": "map_at_1000", "value": 55.345}, {"type": "map_at_3", "value": 45.854}, {"type": "map_at_5", "value": 50.025}, {"type": "mrr_at_1", "value": 60.34}, {"type": "mrr_at_10", "value": 68.804}, {"type": "mrr_at_100", "value": 69.309}, {"type": "mrr_at_1000", "value": 69.32199999999999}, {"type": "mrr_at_3", "value": 66.40899999999999}, {"type": "mrr_at_5", "value": 67.976}, {"type": "ndcg_at_1", "value": 60.34}, {"type": "ndcg_at_10", "value": 62.031000000000006}, {"type": "ndcg_at_100", "value": 68.00500000000001}, {"type": "ndcg_at_1000", "value": 69.286}, {"type": "ndcg_at_3", "value": 56.355999999999995}, {"type": "ndcg_at_5", "value": 58.687}, {"type": "precision_at_1", "value": 60.34}, {"type": "precision_at_10", "value": 17.176}, {"type": "precision_at_100", "value": 2.36}, {"type": "precision_at_1000", "value": 0.259}, {"type": "precision_at_3", "value": 37.14}, {"type": "precision_at_5", "value": 27.809}, 
{"type": "recall_at_1", "value": 32.055}, {"type": "recall_at_10", "value": 70.91}, {"type": "recall_at_100", "value": 91.83}, {"type": "recall_at_1000", "value": 98.871}, {"type": "recall_at_3", "value": 51.202999999999996}, {"type": "recall_at_5", "value": 60.563}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "mteb/hotpotqa", "config": "default", "split": "test", "revision": "ab518f4d6fcca38d87c25209f94beba119d02014"}, "metrics": [{"type": "map_at_1", "value": 43.68}, {"type": "map_at_10", "value": 64.389}, {"type": "map_at_100", "value": 65.24}, {"type": "map_at_1000", "value": 65.303}, {"type": "map_at_3", "value": 61.309000000000005}, {"type": "map_at_5", "value": 63.275999999999996}, {"type": "mrr_at_1", "value": 87.36}, {"type": "mrr_at_10", "value": 91.12}, {"type": "mrr_at_100", "value": 91.227}, {"type": "mrr_at_1000", "value": 91.229}, {"type": "mrr_at_3", "value": 90.57600000000001}, {"type": "mrr_at_5", "value": 90.912}, {"type": "ndcg_at_1", "value": 87.36}, {"type": "ndcg_at_10", "value": 73.076}, {"type": "ndcg_at_100", "value": 75.895}, {"type": "ndcg_at_1000", "value": 77.049}, {"type": "ndcg_at_3", "value": 68.929}, {"type": "ndcg_at_5", "value": 71.28}, {"type": "precision_at_1", "value": 87.36}, {"type": "precision_at_10", "value": 14.741000000000001}, {"type": "precision_at_100", "value": 1.694}, {"type": "precision_at_1000", "value": 0.185}, {"type": "precision_at_3", "value": 43.043}, {"type": "precision_at_5", "value": 27.681}, {"type": "recall_at_1", "value": 43.68}, {"type": "recall_at_10", "value": 73.707}, {"type": "recall_at_100", "value": 84.7}, {"type": "recall_at_1000", "value": 92.309}, {"type": "recall_at_3", "value": 64.564}, {"type": "recall_at_5", "value": 69.203}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 96.75399999999999}, {"type": "ap", "value": 95.29389839242187}, {"type": "f1", "value": 96.75348377433475}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "mteb/msmarco", "config": "default", "split": "dev", "revision": "c5a29a104738b98a9e76336939199e264163d4a0"}, "metrics": [{"type": "map_at_1", "value": 25.176}, {"type": "map_at_10", "value": 38.598}, {"type": "map_at_100", "value": 39.707}, {"type": "map_at_1000", "value": 39.744}, {"type": "map_at_3", "value": 34.566}, {"type": "map_at_5", "value": 36.863}, {"type": "mrr_at_1", "value": 25.874000000000002}, {"type": "mrr_at_10", "value": 39.214}, {"type": "mrr_at_100", "value": 40.251}, {"type": "mrr_at_1000", "value": 40.281}, {"type": "mrr_at_3", "value": 35.291}, {"type": "mrr_at_5", "value": 37.545}, {"type": "ndcg_at_1", "value": 25.874000000000002}, {"type": "ndcg_at_10", "value": 45.98}, {"type": "ndcg_at_100", "value": 51.197}, {"type": "ndcg_at_1000", "value": 52.073}, {"type": "ndcg_at_3", "value": 37.785999999999994}, {"type": "ndcg_at_5", "value": 41.870000000000005}, {"type": "precision_at_1", "value": 25.874000000000002}, {"type": "precision_at_10", "value": 7.181}, {"type": "precision_at_100", "value": 0.979}, {"type": "precision_at_1000", "value": 0.106}, {"type": "precision_at_3", "value": 16.051000000000002}, {"type": "precision_at_5", "value": 11.713}, {"type": "recall_at_1", "value": 25.176}, {"type": "recall_at_10", "value": 68.67699999999999}, {"type": "recall_at_100", "value": 92.55}, {"type": "recall_at_1000", 
"value": 99.164}, {"type": "recall_at_3", "value": 46.372}, {"type": "recall_at_5", "value": 56.16}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 99.03784769721841}, {"type": "f1", "value": 98.97791641821495}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 91.88326493388054}, {"type": "f1", "value": 73.74809928034335}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 85.41358439811701}, {"type": "f1", "value": 83.503679460639}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 89.77135171486215}, {"type": "f1", "value": 88.89843747468366}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 46.22695362087359}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 44.132372165849425}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 33.35680810650402}, {"type": "mrr", "value": 34.72625715637218}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "mteb/nfcorpus", "config": "default", "split": "test", "revision": "ec0fa4fe99da2ff19ca1214b7966684033a58814"}, "metrics": [{"type": "map_at_1", "value": 7.165000000000001}, {"type": "map_at_10", "value": 15.424}, {"type": "map_at_100", "value": 20.28}, {"type": "map_at_1000", "value": 22.065}, {"type": "map_at_3", "value": 11.236}, {"type": "map_at_5", "value": 13.025999999999998}, {"type": "mrr_at_1", "value": 51.702999999999996}, {"type": "mrr_at_10", "value": 59.965}, {"type": "mrr_at_100", "value": 60.667}, {"type": "mrr_at_1000", "value": 60.702999999999996}, {"type": "mrr_at_3", "value": 58.772000000000006}, {"type": "mrr_at_5", "value": 59.267}, {"type": "ndcg_at_1", "value": 49.536}, {"type": "ndcg_at_10", "value": 40.6}, {"type": "ndcg_at_100", "value": 37.848}, {"type": "ndcg_at_1000", "value": 46.657}, {"type": "ndcg_at_3", "value": 46.117999999999995}, {"type": "ndcg_at_5", "value": 43.619}, {"type": "precision_at_1", "value": 51.393}, {"type": "precision_at_10", "value": 30.31}, {"type": "precision_at_100", "value": 9.972}, {"type": "precision_at_1000", "value": 2.329}, {"type": "precision_at_3", "value": 43.137}, {"type": "precision_at_5", "value": 37.585}, 
{"type": "recall_at_1", "value": 7.165000000000001}, {"type": "recall_at_10", "value": 19.689999999999998}, {"type": "recall_at_100", "value": 39.237}, {"type": "recall_at_1000", "value": 71.417}, {"type": "recall_at_3", "value": 12.247}, {"type": "recall_at_5", "value": 14.902999999999999}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "mteb/nq", "config": "default", "split": "test", "revision": "b774495ed302d8c44a3a7ea25c90dbce03968f31"}, "metrics": [{"type": "map_at_1", "value": 42.653999999999996}, {"type": "map_at_10", "value": 59.611999999999995}, {"type": "map_at_100", "value": 60.32300000000001}, {"type": "map_at_1000", "value": 60.336}, {"type": "map_at_3", "value": 55.584999999999994}, {"type": "map_at_5", "value": 58.19}, {"type": "mrr_at_1", "value": 47.683}, {"type": "mrr_at_10", "value": 62.06700000000001}, {"type": "mrr_at_100", "value": 62.537}, {"type": "mrr_at_1000", "value": 62.544999999999995}, {"type": "mrr_at_3", "value": 59.178}, {"type": "mrr_at_5", "value": 61.034}, {"type": "ndcg_at_1", "value": 47.654}, {"type": "ndcg_at_10", "value": 67.001}, {"type": "ndcg_at_100", "value": 69.73899999999999}, {"type": "ndcg_at_1000", "value": 69.986}, {"type": "ndcg_at_3", "value": 59.95700000000001}, {"type": "ndcg_at_5", "value": 64.025}, {"type": "precision_at_1", "value": 47.654}, {"type": "precision_at_10", "value": 10.367999999999999}, {"type": "precision_at_100", "value": 1.192}, {"type": "precision_at_1000", "value": 0.121}, {"type": "precision_at_3", "value": 26.651000000000003}, {"type": "precision_at_5", "value": 18.459}, {"type": "recall_at_1", "value": 42.653999999999996}, {"type": "recall_at_10", "value": 86.619}, {"type": "recall_at_100", "value": 98.04899999999999}, {"type": "recall_at_1000", "value": 99.812}, {"type": "recall_at_3", "value": 68.987}, {"type": "recall_at_5", "value": 78.158}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "mteb/quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 72.538}, {"type": "map_at_10", "value": 86.702}, {"type": "map_at_100", "value": 87.31}, {"type": "map_at_1000", "value": 87.323}, {"type": "map_at_3", "value": 83.87}, {"type": "map_at_5", "value": 85.682}, {"type": "mrr_at_1", "value": 83.31}, {"type": "mrr_at_10", "value": 89.225}, {"type": "mrr_at_100", "value": 89.30399999999999}, {"type": "mrr_at_1000", "value": 89.30399999999999}, {"type": "mrr_at_3", "value": 88.44300000000001}, {"type": "mrr_at_5", "value": 89.005}, {"type": "ndcg_at_1", "value": 83.32000000000001}, {"type": "ndcg_at_10", "value": 90.095}, {"type": "ndcg_at_100", "value": 91.12}, {"type": "ndcg_at_1000", "value": 91.179}, {"type": "ndcg_at_3", "value": 87.606}, {"type": "ndcg_at_5", "value": 89.031}, {"type": "precision_at_1", "value": 83.32000000000001}, {"type": "precision_at_10", "value": 13.641}, {"type": "precision_at_100", "value": 1.541}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 38.377}, {"type": "precision_at_5", "value": 25.162000000000003}, {"type": "recall_at_1", "value": 72.538}, {"type": "recall_at_10", "value": 96.47200000000001}, {"type": "recall_at_100", "value": 99.785}, {"type": "recall_at_1000", "value": 99.99900000000001}, {"type": "recall_at_3", "value": 89.278}, {"type": "recall_at_5", "value": 93.367}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", 
"revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 73.55219145406065}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 74.13437105242755}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "mteb/scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 6.873}, {"type": "map_at_10", "value": 17.944}, {"type": "map_at_100", "value": 21.171}, {"type": "map_at_1000", "value": 21.528}, {"type": "map_at_3", "value": 12.415}, {"type": "map_at_5", "value": 15.187999999999999}, {"type": "mrr_at_1", "value": 33.800000000000004}, {"type": "mrr_at_10", "value": 46.455}, {"type": "mrr_at_100", "value": 47.378}, {"type": "mrr_at_1000", "value": 47.394999999999996}, {"type": "mrr_at_3", "value": 42.367}, {"type": "mrr_at_5", "value": 44.972}, {"type": "ndcg_at_1", "value": 33.800000000000004}, {"type": "ndcg_at_10", "value": 28.907}, {"type": "ndcg_at_100", "value": 39.695}, {"type": "ndcg_at_1000", "value": 44.582}, {"type": "ndcg_at_3", "value": 26.949}, {"type": "ndcg_at_5", "value": 23.988}, {"type": "precision_at_1", "value": 33.800000000000004}, {"type": "precision_at_10", "value": 15.079999999999998}, {"type": "precision_at_100", "value": 3.056}, {"type": "precision_at_1000", "value": 0.42100000000000004}, {"type": "precision_at_3", "value": 25.167}, {"type": "precision_at_5", "value": 21.26}, {"type": "recall_at_1", "value": 6.873}, {"type": "recall_at_10", "value": 30.568}, {"type": "recall_at_100", "value": 62.062}, {"type": "recall_at_1000", "value": 85.37700000000001}, {"type": "recall_at_3", "value": 15.312999999999999}, {"type": "recall_at_5", "value": 21.575}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.37009118256057}, {"type": "cos_sim_spearman", "value": 79.27986395671529}, {"type": "euclidean_pearson", "value": 79.18037715442115}, {"type": "euclidean_spearman", "value": 79.28004791561621}, {"type": "manhattan_pearson", "value": 79.34062972800541}, {"type": "manhattan_spearman", "value": 79.43106695543402}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.48474767383833}, {"type": "cos_sim_spearman", "value": 79.54505388752513}, {"type": "euclidean_pearson", "value": 83.43282704179565}, {"type": "euclidean_spearman", "value": 79.54579919925405}, {"type": "manhattan_pearson", "value": 83.77564492427952}, {"type": "manhattan_spearman", "value": 79.84558396989286}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.803698035802}, {"type": "cos_sim_spearman", "value": 88.83451367754881}, {"type": "euclidean_pearson", "value": 88.28939285711628}, {"type": "euclidean_spearman", "value": 88.83528996073112}, {"type": "manhattan_pearson", "value": 88.28017412671795}, {"type": "manhattan_spearman", "value": 
88.9228828016344}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.27469288153428}, {"type": "cos_sim_spearman", "value": 83.87477064876288}, {"type": "euclidean_pearson", "value": 84.2601737035379}, {"type": "euclidean_spearman", "value": 83.87431082479074}, {"type": "manhattan_pearson", "value": 84.3621547772745}, {"type": "manhattan_spearman", "value": 84.12094375000423}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.12749863201587}, {"type": "cos_sim_spearman", "value": 88.54287568368565}, {"type": "euclidean_pearson", "value": 87.90429700607999}, {"type": "euclidean_spearman", "value": 88.5437689576261}, {"type": "manhattan_pearson", "value": 88.19276653356833}, {"type": "manhattan_spearman", "value": 88.99995393814679}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.68398747560902}, {"type": "cos_sim_spearman", "value": 86.48815303460574}, {"type": "euclidean_pearson", "value": 85.52356631237954}, {"type": "euclidean_spearman", "value": 86.486391949551}, {"type": "manhattan_pearson", "value": 85.67267981761788}, {"type": "manhattan_spearman", "value": 86.7073696332485}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.9057107443124}, {"type": "cos_sim_spearman", "value": 88.7312168757697}, {"type": "euclidean_pearson", "value": 88.72810439714794}, {"type": "euclidean_spearman", "value": 88.71976185854771}, {"type": "manhattan_pearson", "value": 88.50433745949111}, {"type": "manhattan_spearman", "value": 88.51726175544195}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 67.59391795109886}, {"type": "cos_sim_spearman", "value": 66.87613008631367}, {"type": "euclidean_pearson", "value": 69.23198488262217}, {"type": "euclidean_spearman", "value": 66.85427723013692}, {"type": "manhattan_pearson", "value": 69.50730124841084}, {"type": "manhattan_spearman", "value": 67.10404669820792}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.0820605344619}, {"type": "cos_sim_spearman", "value": 86.8518089863434}, {"type": "euclidean_pearson", "value": 86.31087134689284}, {"type": "euclidean_spearman", "value": 86.8518520517941}, {"type": "manhattan_pearson", "value": 86.47203796160612}, {"type": "manhattan_spearman", "value": 87.1080149734421}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": 
[{"type": "map", "value": 89.09255369305481}, {"type": "mrr", "value": 97.10323445617563}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "mteb/scifact", "config": "default", "split": "test", "revision": "0228b52cf27578f30900b9e5271d331663a030d7"}, "metrics": [{"type": "map_at_1", "value": 61.260999999999996}, {"type": "map_at_10", "value": 74.043}, {"type": "map_at_100", "value": 74.37700000000001}, {"type": "map_at_1000", "value": 74.384}, {"type": "map_at_3", "value": 71.222}, {"type": "map_at_5", "value": 72.875}, {"type": "mrr_at_1", "value": 64.333}, {"type": "mrr_at_10", "value": 74.984}, {"type": "mrr_at_100", "value": 75.247}, {"type": "mrr_at_1000", "value": 75.25500000000001}, {"type": "mrr_at_3", "value": 73.167}, {"type": "mrr_at_5", "value": 74.35000000000001}, {"type": "ndcg_at_1", "value": 64.333}, {"type": "ndcg_at_10", "value": 79.06}, {"type": "ndcg_at_100", "value": 80.416}, {"type": "ndcg_at_1000", "value": 80.55600000000001}, {"type": "ndcg_at_3", "value": 74.753}, {"type": "ndcg_at_5", "value": 76.97500000000001}, {"type": "precision_at_1", "value": 64.333}, {"type": "precision_at_10", "value": 10.567}, {"type": "precision_at_100", "value": 1.1199999999999999}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 29.889}, {"type": "precision_at_5", "value": 19.533}, {"type": "recall_at_1", "value": 61.260999999999996}, {"type": "recall_at_10", "value": 93.167}, {"type": "recall_at_100", "value": 99.0}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 81.667}, {"type": "recall_at_5", "value": 87.394}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.71980198019801}, {"type": "cos_sim_ap", "value": 92.81616007802704}, {"type": "cos_sim_f1", "value": 85.17548454688318}, {"type": "cos_sim_precision", "value": 89.43894389438944}, {"type": "cos_sim_recall", "value": 81.3}, {"type": "dot_accuracy", "value": 99.71980198019801}, {"type": "dot_ap", "value": 92.81398760591358}, {"type": "dot_f1", "value": 85.17548454688318}, {"type": "dot_precision", "value": 89.43894389438944}, {"type": "dot_recall", "value": 81.3}, {"type": "euclidean_accuracy", "value": 99.71980198019801}, {"type": "euclidean_ap", "value": 92.81560637245072}, {"type": "euclidean_f1", "value": 85.17548454688318}, {"type": "euclidean_precision", "value": 89.43894389438944}, {"type": "euclidean_recall", "value": 81.3}, {"type": "manhattan_accuracy", "value": 99.73069306930694}, {"type": "manhattan_ap", "value": 93.14005487480794}, {"type": "manhattan_f1", "value": 85.56263269639068}, {"type": "manhattan_precision", "value": 91.17647058823529}, {"type": "manhattan_recall", "value": 80.60000000000001}, {"type": "max_accuracy", "value": 99.73069306930694}, {"type": "max_ap", "value": 93.14005487480794}, {"type": "max_f1", "value": 85.56263269639068}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 79.86443362395185}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", 
"config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 49.40897096662564}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 55.66040806627947}, {"type": "mrr", "value": 56.58670475766064}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 31.51015090598575}, {"type": "cos_sim_spearman", "value": 31.35016454939226}, {"type": "dot_pearson", "value": 31.5150068731}, {"type": "dot_spearman", "value": 31.34790869023487}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "mteb/trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.254}, {"type": "map_at_10", "value": 2.064}, {"type": "map_at_100", "value": 12.909}, {"type": "map_at_1000", "value": 31.761}, {"type": "map_at_3", "value": 0.738}, {"type": "map_at_5", "value": 1.155}, {"type": "mrr_at_1", "value": 96.0}, {"type": "mrr_at_10", "value": 98.0}, {"type": "mrr_at_100", "value": 98.0}, {"type": "mrr_at_1000", "value": 98.0}, {"type": "mrr_at_3", "value": 98.0}, {"type": "mrr_at_5", "value": 98.0}, {"type": "ndcg_at_1", "value": 93.0}, {"type": "ndcg_at_10", "value": 82.258}, {"type": "ndcg_at_100", "value": 64.34}, {"type": "ndcg_at_1000", "value": 57.912}, {"type": "ndcg_at_3", "value": 90.827}, {"type": "ndcg_at_5", "value": 86.79}, {"type": "precision_at_1", "value": 96.0}, {"type": "precision_at_10", "value": 84.8}, {"type": "precision_at_100", "value": 66.0}, {"type": "precision_at_1000", "value": 25.356}, {"type": "precision_at_3", "value": 94.667}, {"type": "precision_at_5", "value": 90.4}, {"type": "recall_at_1", "value": 0.254}, {"type": "recall_at_10", "value": 2.1950000000000003}, {"type": "recall_at_100", "value": 16.088}, {"type": "recall_at_1000", "value": 54.559000000000005}, {"type": "recall_at_3", "value": 0.75}, {"type": "recall_at_5", "value": 1.191}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "mteb/touche2020", "config": "default", "split": "test", "revision": "a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f"}, "metrics": [{"type": "map_at_1", "value": 2.976}, {"type": "map_at_10", "value": 11.389000000000001}, {"type": "map_at_100", "value": 18.429000000000002}, {"type": "map_at_1000", "value": 20.113}, {"type": "map_at_3", "value": 6.483}, {"type": "map_at_5", "value": 8.770999999999999}, {"type": "mrr_at_1", "value": 40.816}, {"type": "mrr_at_10", "value": 58.118}, {"type": "mrr_at_100", "value": 58.489999999999995}, {"type": "mrr_at_1000", "value": 58.489999999999995}, {"type": "mrr_at_3", "value": 53.061}, {"type": "mrr_at_5", "value": 57.041}, {"type": "ndcg_at_1", "value": 40.816}, {"type": "ndcg_at_10", "value": 30.567}, {"type": "ndcg_at_100", "value": 42.44}, {"type": "ndcg_at_1000", "value": 53.480000000000004}, {"type": "ndcg_at_3", "value": 36.016}, {"type": "ndcg_at_5", "value": 34.257}, {"type": "precision_at_1", "value": 42.857}, {"type": "precision_at_10", "value": 25.714}, {"type": "precision_at_100", "value": 8.429}, {"type": "precision_at_1000", "value": 1.5939999999999999}, {"type": 
"precision_at_3", "value": 36.735}, {"type": "precision_at_5", "value": 33.878}, {"type": "recall_at_1", "value": 2.976}, {"type": "recall_at_10", "value": 17.854999999999997}, {"type": "recall_at_100", "value": 51.833}, {"type": "recall_at_1000", "value": 86.223}, {"type": "recall_at_3", "value": 7.887}, {"type": "recall_at_5", "value": 12.026}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 85.1174}, {"type": "ap", "value": 30.169441069345748}, {"type": "f1", "value": 69.79254701873245}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 72.58347481607245}, {"type": "f1", "value": 72.74877295564937}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 53.90586138221305}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 87.35769207844072}, {"type": "cos_sim_ap", "value": 77.9645072410354}, {"type": "cos_sim_f1", "value": 71.32352941176471}, {"type": "cos_sim_precision", "value": 66.5903890160183}, {"type": "cos_sim_recall", "value": 76.78100263852242}, {"type": "dot_accuracy", "value": 87.37557370209214}, {"type": "dot_ap", "value": 77.96250046429908}, {"type": "dot_f1", "value": 71.28932757557064}, {"type": "dot_precision", "value": 66.95249130938586}, {"type": "dot_recall", "value": 76.22691292875989}, {"type": "euclidean_accuracy", "value": 87.35173153722357}, {"type": "euclidean_ap", "value": 77.96520460741593}, {"type": "euclidean_f1", "value": 71.32470733210104}, {"type": "euclidean_precision", "value": 66.91329479768785}, {"type": "euclidean_recall", "value": 76.35883905013192}, {"type": "manhattan_accuracy", "value": 87.25636287774931}, {"type": "manhattan_ap", "value": 77.77752485611796}, {"type": "manhattan_f1", "value": 71.18148599269183}, {"type": "manhattan_precision", "value": 66.10859728506787}, {"type": "manhattan_recall", "value": 77.0976253298153}, {"type": "max_accuracy", "value": 87.37557370209214}, {"type": "max_ap", "value": 77.96520460741593}, {"type": "max_f1", "value": 71.32470733210104}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.38176737687739}, {"type": "cos_sim_ap", "value": 86.58811861657401}, {"type": "cos_sim_f1", "value": 79.09430644097604}, {"type": "cos_sim_precision", "value": 75.45085977911366}, {"type": "cos_sim_recall", "value": 83.10748383122882}, {"type": "dot_accuracy", "value": 89.38370784336554}, {"type": "dot_ap", "value": 86.58840606004333}, {"type": "dot_f1", "value": 79.10179860068133}, 
{"type": "dot_precision", "value": 75.44546153308643}, {"type": "dot_recall", "value": 83.13058207576223}, {"type": "euclidean_accuracy", "value": 89.38564830985369}, {"type": "euclidean_ap", "value": 86.58820721061164}, {"type": "euclidean_f1", "value": 79.09070942235888}, {"type": "euclidean_precision", "value": 75.38729937194697}, {"type": "euclidean_recall", "value": 83.17677856482906}, {"type": "manhattan_accuracy", "value": 89.40699344122326}, {"type": "manhattan_ap", "value": 86.60631843011362}, {"type": "manhattan_f1", "value": 79.14949970570925}, {"type": "manhattan_precision", "value": 75.78191039729502}, {"type": "manhattan_recall", "value": 82.83030489682784}, {"type": "max_accuracy", "value": 89.40699344122326}, {"type": "max_ap", "value": 86.60631843011362}, {"type": "max_f1", "value": 79.14949970570925}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB AFQMC", "type": "C-MTEB/AFQMC", "config": "default", "split": "validation", "revision": "b44c3b011063adb25877c13823db83bb193913c4"}, "metrics": [{"type": "cos_sim_pearson", "value": 65.58442135663871}, {"type": "cos_sim_spearman", "value": 72.2538631361313}, {"type": "euclidean_pearson", "value": 70.97255486607429}, {"type": "euclidean_spearman", "value": 72.25374250228647}, {"type": "manhattan_pearson", "value": 70.83250199989911}, {"type": "manhattan_spearman", "value": 72.14819496536272}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB ATEC", "type": "C-MTEB/ATEC", "config": "default", "split": "test", "revision": "0f319b1142f28d00e055a6770f3f726ae9b7d865"}, "metrics": [{"type": "cos_sim_pearson", "value": 59.99478404929932}, {"type": "cos_sim_spearman", "value": 62.61836216999812}, {"type": "euclidean_pearson", "value": 66.86429811933593}, {"type": "euclidean_spearman", "value": 62.6183520374191}, {"type": "manhattan_pearson", "value": 66.8063778911633}, {"type": "manhattan_spearman", "value": 62.569607573241115}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (zh)", "type": "mteb/amazon_reviews_multi", "config": "zh", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 53.98400000000001}, {"type": "f1", "value": 51.21447361350723}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BQ", "type": "C-MTEB/BQ", "config": "default", "split": "test", "revision": "e3dda5e115e487b39ec7e618c0c6a29137052a55"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.11941660686553}, {"type": "cos_sim_spearman", "value": 81.25029594540435}, {"type": "euclidean_pearson", "value": 82.06973504238826}, {"type": "euclidean_spearman", "value": 81.2501989488524}, {"type": "manhattan_pearson", "value": 82.10094630392753}, {"type": "manhattan_spearman", "value": 81.27987244392389}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB CLSClusteringP2P", "type": "C-MTEB/CLSClusteringP2P", "config": "default", "split": "test", "revision": "4b6227591c6c1a73bc76b1055f3b7f3588e72476"}, "metrics": [{"type": "v_measure", "value": 47.07270168705156}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB CLSClusteringS2S", "type": "C-MTEB/CLSClusteringS2S", "config": "default", "split": "test", "revision": "e458b3f5414b62b7f9f83499ac1f5497ae2e869f"}, "metrics": [{"type": "v_measure", "value": 45.98511703185043}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB CMedQAv1", "type": "C-MTEB/CMedQAv1-reranking", "config": "default", "split": "test", "revision": "8d7f1e942507dac42dc58017c1a001c3717da7df"}, 
"metrics": [{"type": "map", "value": 88.19895157194931}, {"type": "mrr", "value": 90.21424603174603}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB CMedQAv2", "type": "C-MTEB/CMedQAv2-reranking", "config": "default", "split": "test", "revision": "23d186750531a14a0357ca22cd92d712fd512ea0"}, "metrics": [{"type": "map", "value": 88.03317320980119}, {"type": "mrr", "value": 89.9461507936508}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CmedqaRetrieval", "type": "C-MTEB/CmedqaRetrieval", "config": "default", "split": "dev", "revision": "cd540c506dae1cf9e9a59c3e06f42030d54e7301"}, "metrics": [{"type": "map_at_1", "value": 29.037000000000003}, {"type": "map_at_10", "value": 42.001}, {"type": "map_at_100", "value": 43.773}, {"type": "map_at_1000", "value": 43.878}, {"type": "map_at_3", "value": 37.637}, {"type": "map_at_5", "value": 40.034}, {"type": "mrr_at_1", "value": 43.136}, {"type": "mrr_at_10", "value": 51.158}, {"type": "mrr_at_100", "value": 52.083}, {"type": "mrr_at_1000", "value": 52.12}, {"type": "mrr_at_3", "value": 48.733}, {"type": "mrr_at_5", "value": 50.025}, {"type": "ndcg_at_1", "value": 43.136}, {"type": "ndcg_at_10", "value": 48.685}, {"type": "ndcg_at_100", "value": 55.513}, {"type": "ndcg_at_1000", "value": 57.242000000000004}, {"type": "ndcg_at_3", "value": 43.329}, {"type": "ndcg_at_5", "value": 45.438}, {"type": "precision_at_1", "value": 43.136}, {"type": "precision_at_10", "value": 10.56}, {"type": "precision_at_100", "value": 1.6129999999999998}, {"type": "precision_at_1000", "value": 0.184}, {"type": "precision_at_3", "value": 24.064}, {"type": "precision_at_5", "value": 17.269000000000002}, {"type": "recall_at_1", "value": 29.037000000000003}, {"type": "recall_at_10", "value": 59.245000000000005}, {"type": "recall_at_100", "value": 87.355}, {"type": "recall_at_1000", "value": 98.74000000000001}, {"type": "recall_at_3", "value": 42.99}, {"type": "recall_at_5", "value": 49.681999999999995}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB Cmnli", "type": "C-MTEB/CMNLI", "config": "default", "split": "validation", "revision": "41bc36f332156f7adc9e38f53777c959b2ae9766"}, "metrics": [{"type": "cos_sim_accuracy", "value": 82.68190018039687}, {"type": "cos_sim_ap", "value": 90.18017125327886}, {"type": "cos_sim_f1", "value": 83.64080906868193}, {"type": "cos_sim_precision", "value": 79.7076890489303}, {"type": "cos_sim_recall", "value": 87.98223053542202}, {"type": "dot_accuracy", "value": 82.68190018039687}, {"type": "dot_ap", "value": 90.18782350103646}, {"type": "dot_f1", "value": 83.64242087729039}, {"type": "dot_precision", "value": 79.65313028764805}, {"type": "dot_recall", "value": 88.05237315875614}, {"type": "euclidean_accuracy", "value": 82.68190018039687}, {"type": "euclidean_ap", "value": 90.1801957900632}, {"type": "euclidean_f1", "value": 83.63636363636364}, {"type": "euclidean_precision", "value": 79.52772506852203}, {"type": "euclidean_recall", "value": 88.19265840542437}, {"type": "manhattan_accuracy", "value": 82.14070956103427}, {"type": "manhattan_ap", "value": 89.96178420101427}, {"type": "manhattan_f1", "value": 83.21087838578791}, {"type": "manhattan_precision", "value": 78.35605121850475}, {"type": "manhattan_recall", "value": 88.70703764320785}, {"type": "max_accuracy", "value": 82.68190018039687}, {"type": "max_ap", "value": 90.18782350103646}, {"type": "max_f1", "value": 83.64242087729039}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CovidRetrieval", "type": "C-MTEB/CovidRetrieval", 
"config": "default", "split": "dev", "revision": "1271c7809071a13532e05f25fb53511ffce77117"}, "metrics": [{"type": "map_at_1", "value": 72.234}, {"type": "map_at_10", "value": 80.10000000000001}, {"type": "map_at_100", "value": 80.36}, {"type": "map_at_1000", "value": 80.363}, {"type": "map_at_3", "value": 78.315}, {"type": "map_at_5", "value": 79.607}, {"type": "mrr_at_1", "value": 72.392}, {"type": "mrr_at_10", "value": 80.117}, {"type": "mrr_at_100", "value": 80.36999999999999}, {"type": "mrr_at_1000", "value": 80.373}, {"type": "mrr_at_3", "value": 78.469}, {"type": "mrr_at_5", "value": 79.633}, {"type": "ndcg_at_1", "value": 72.392}, {"type": "ndcg_at_10", "value": 83.651}, {"type": "ndcg_at_100", "value": 84.749}, {"type": "ndcg_at_1000", "value": 84.83000000000001}, {"type": "ndcg_at_3", "value": 80.253}, {"type": "ndcg_at_5", "value": 82.485}, {"type": "precision_at_1", "value": 72.392}, {"type": "precision_at_10", "value": 9.557}, {"type": "precision_at_100", "value": 1.004}, {"type": "precision_at_1000", "value": 0.101}, {"type": "precision_at_3", "value": 28.732000000000003}, {"type": "precision_at_5", "value": 18.377}, {"type": "recall_at_1", "value": 72.234}, {"type": "recall_at_10", "value": 94.573}, {"type": "recall_at_100", "value": 99.368}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 85.669}, {"type": "recall_at_5", "value": 91.01700000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DuRetrieval", "type": "C-MTEB/DuRetrieval", "config": "default", "split": "dev", "revision": "a1a333e290fe30b10f3f56498e3a0d911a693ced"}, "metrics": [{"type": "map_at_1", "value": 26.173999999999996}, {"type": "map_at_10", "value": 80.04}, {"type": "map_at_100", "value": 82.94500000000001}, {"type": "map_at_1000", "value": 82.98100000000001}, {"type": "map_at_3", "value": 55.562999999999995}, {"type": "map_at_5", "value": 69.89800000000001}, {"type": "mrr_at_1", "value": 89.5}, {"type": "mrr_at_10", "value": 92.996}, {"type": "mrr_at_100", "value": 93.06400000000001}, {"type": "mrr_at_1000", "value": 93.065}, {"type": "mrr_at_3", "value": 92.658}, {"type": "mrr_at_5", "value": 92.84599999999999}, {"type": "ndcg_at_1", "value": 89.5}, {"type": "ndcg_at_10", "value": 87.443}, {"type": "ndcg_at_100", "value": 90.253}, {"type": "ndcg_at_1000", "value": 90.549}, {"type": "ndcg_at_3", "value": 85.874}, {"type": "ndcg_at_5", "value": 84.842}, {"type": "precision_at_1", "value": 89.5}, {"type": "precision_at_10", "value": 41.805}, {"type": "precision_at_100", "value": 4.827}, {"type": "precision_at_1000", "value": 0.49}, {"type": "precision_at_3", "value": 76.85}, {"type": "precision_at_5", "value": 64.8}, {"type": "recall_at_1", "value": 26.173999999999996}, {"type": "recall_at_10", "value": 89.101}, {"type": "recall_at_100", "value": 98.08099999999999}, {"type": "recall_at_1000", "value": 99.529}, {"type": "recall_at_3", "value": 57.902}, {"type": "recall_at_5", "value": 74.602}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB EcomRetrieval", "type": "C-MTEB/EcomRetrieval", "config": "default", "split": "dev", "revision": "687de13dc7294d6fd9be10c6945f9e8fec8166b9"}, "metrics": [{"type": "map_at_1", "value": 56.10000000000001}, {"type": "map_at_10", "value": 66.15299999999999}, {"type": "map_at_100", "value": 66.625}, {"type": "map_at_1000", "value": 66.636}, {"type": "map_at_3", "value": 63.632999999999996}, {"type": "map_at_5", "value": 65.293}, {"type": "mrr_at_1", "value": 56.10000000000001}, {"type": "mrr_at_10", "value": 
66.15299999999999}, {"type": "mrr_at_100", "value": 66.625}, {"type": "mrr_at_1000", "value": 66.636}, {"type": "mrr_at_3", "value": 63.632999999999996}, {"type": "mrr_at_5", "value": 65.293}, {"type": "ndcg_at_1", "value": 56.10000000000001}, {"type": "ndcg_at_10", "value": 71.146}, {"type": "ndcg_at_100", "value": 73.27799999999999}, {"type": "ndcg_at_1000", "value": 73.529}, {"type": "ndcg_at_3", "value": 66.09}, {"type": "ndcg_at_5", "value": 69.08999999999999}, {"type": "precision_at_1", "value": 56.10000000000001}, {"type": "precision_at_10", "value": 8.68}, {"type": "precision_at_100", "value": 0.964}, {"type": "precision_at_1000", "value": 0.098}, {"type": "precision_at_3", "value": 24.4}, {"type": "precision_at_5", "value": 16.1}, {"type": "recall_at_1", "value": 56.10000000000001}, {"type": "recall_at_10", "value": 86.8}, {"type": "recall_at_100", "value": 96.39999999999999}, {"type": "recall_at_1000", "value": 98.3}, {"type": "recall_at_3", "value": 73.2}, {"type": "recall_at_5", "value": 80.5}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB IFlyTek", "type": "C-MTEB/IFlyTek-classification", "config": "default", "split": "validation", "revision": "421605374b29664c5fc098418fe20ada9bd55f8a"}, "metrics": [{"type": "accuracy", "value": 54.52096960369373}, {"type": "f1", "value": 40.930845295808695}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB JDReview", "type": "C-MTEB/JDReview-classification", "config": "default", "split": "test", "revision": "b7c64bd89eb87f8ded463478346f76731f07bf8b"}, "metrics": [{"type": "accuracy", "value": 86.51031894934334}, {"type": "ap", "value": 55.9516014323483}, {"type": "f1", "value": 81.54813679326381}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB LCQMC", "type": "C-MTEB/LCQMC", "config": "default", "split": "test", "revision": "17f9b096f80380fce5ed12a9be8be7784b337daf"}, "metrics": [{"type": "cos_sim_pearson", "value": 69.67437838574276}, {"type": "cos_sim_spearman", "value": 73.81314174653045}, {"type": "euclidean_pearson", "value": 72.63430276680275}, {"type": "euclidean_spearman", "value": 73.81358736777001}, {"type": "manhattan_pearson", "value": 72.58743833842829}, {"type": "manhattan_spearman", "value": 73.7590419009179}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MMarcoReranking", "type": "C-MTEB/Mmarco-reranking", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map", "value": 31.648613483640254}, {"type": "mrr", "value": 30.37420634920635}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MMarcoRetrieval", "type": "C-MTEB/MMarcoRetrieval", "config": "default", "split": "dev", "revision": "539bbde593d947e2a124ba72651aafc09eb33fc2"}, "metrics": [{"type": "map_at_1", "value": 73.28099999999999}, {"type": "map_at_10", "value": 81.977}, {"type": "map_at_100", "value": 82.222}, {"type": "map_at_1000", "value": 82.22699999999999}, {"type": "map_at_3", "value": 80.441}, {"type": "map_at_5", "value": 81.46600000000001}, {"type": "mrr_at_1", "value": 75.673}, {"type": "mrr_at_10", "value": 82.41000000000001}, {"type": "mrr_at_100", "value": 82.616}, {"type": "mrr_at_1000", "value": 82.621}, {"type": "mrr_at_3", "value": 81.094}, {"type": "mrr_at_5", "value": 81.962}, {"type": "ndcg_at_1", "value": 75.673}, {"type": "ndcg_at_10", "value": 85.15599999999999}, {"type": "ndcg_at_100", "value": 86.151}, {"type": "ndcg_at_1000", "value": 86.26899999999999}, {"type": "ndcg_at_3", "value": 82.304}, {"type": "ndcg_at_5", "value": 84.009}, {"type": 
"precision_at_1", "value": 75.673}, {"type": "precision_at_10", "value": 10.042}, {"type": "precision_at_100", "value": 1.052}, {"type": "precision_at_1000", "value": 0.106}, {"type": "precision_at_3", "value": 30.673000000000002}, {"type": "precision_at_5", "value": 19.326999999999998}, {"type": "recall_at_1", "value": 73.28099999999999}, {"type": "recall_at_10", "value": 94.446}, {"type": "recall_at_100", "value": 98.737}, {"type": "recall_at_1000", "value": 99.649}, {"type": "recall_at_3", "value": 86.984}, {"type": "recall_at_5", "value": 91.024}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-CN)", "type": "mteb/amazon_massive_intent", "config": "zh-CN", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 81.08607935440484}, {"type": "f1", "value": 78.24879986066307}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-CN)", "type": "mteb/amazon_massive_scenario", "config": "zh-CN", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 86.05917955615332}, {"type": "f1", "value": 85.05279279434997}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MedicalRetrieval", "type": "C-MTEB/MedicalRetrieval", "config": "default", "split": "dev", "revision": "2039188fb5800a9803ba5048df7b76e6fb151fc6"}, "metrics": [{"type": "map_at_1", "value": 56.2}, {"type": "map_at_10", "value": 62.57899999999999}, {"type": "map_at_100", "value": 63.154999999999994}, {"type": "map_at_1000", "value": 63.193}, {"type": "map_at_3", "value": 61.217}, {"type": "map_at_5", "value": 62.012}, {"type": "mrr_at_1", "value": 56.3}, {"type": "mrr_at_10", "value": 62.629000000000005}, {"type": "mrr_at_100", "value": 63.205999999999996}, {"type": "mrr_at_1000", "value": 63.244}, {"type": "mrr_at_3", "value": 61.267}, {"type": "mrr_at_5", "value": 62.062}, {"type": "ndcg_at_1", "value": 56.2}, {"type": "ndcg_at_10", "value": 65.592}, {"type": "ndcg_at_100", "value": 68.657}, {"type": "ndcg_at_1000", "value": 69.671}, {"type": "ndcg_at_3", "value": 62.808}, {"type": "ndcg_at_5", "value": 64.24499999999999}, {"type": "precision_at_1", "value": 56.2}, {"type": "precision_at_10", "value": 7.5}, {"type": "precision_at_100", "value": 0.899}, {"type": "precision_at_1000", "value": 0.098}, {"type": "precision_at_3", "value": 22.467000000000002}, {"type": "precision_at_5", "value": 14.180000000000001}, {"type": "recall_at_1", "value": 56.2}, {"type": "recall_at_10", "value": 75.0}, {"type": "recall_at_100", "value": 89.9}, {"type": "recall_at_1000", "value": 97.89999999999999}, {"type": "recall_at_3", "value": 67.4}, {"type": "recall_at_5", "value": 70.89999999999999}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MultilingualSentiment", "type": "C-MTEB/MultilingualSentiment-classification", "config": "default", "split": "validation", "revision": "46958b007a63fdbf239b7672c25d0bea67b5ea1a"}, "metrics": [{"type": "accuracy", "value": 76.87666666666667}, {"type": "f1", "value": 76.7317686219665}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB Ocnli", "type": "C-MTEB/OCNLI", "config": "default", "split": "validation", "revision": "66e76a618a34d6d565d5538088562851e6daa7ec"}, "metrics": [{"type": "cos_sim_accuracy", "value": 79.64266377910124}, {"type": "cos_sim_ap", "value": 84.78274442344829}, {"type": "cos_sim_f1", "value": 81.16947472745292}, {"type": 
"cos_sim_precision", "value": 76.47058823529412}, {"type": "cos_sim_recall", "value": 86.48363252375924}, {"type": "dot_accuracy", "value": 79.64266377910124}, {"type": "dot_ap", "value": 84.7851404063692}, {"type": "dot_f1", "value": 81.16947472745292}, {"type": "dot_precision", "value": 76.47058823529412}, {"type": "dot_recall", "value": 86.48363252375924}, {"type": "euclidean_accuracy", "value": 79.64266377910124}, {"type": "euclidean_ap", "value": 84.78068373762378}, {"type": "euclidean_f1", "value": 81.14794656110837}, {"type": "euclidean_precision", "value": 76.35009310986965}, {"type": "euclidean_recall", "value": 86.58922914466737}, {"type": "manhattan_accuracy", "value": 79.48023822414727}, {"type": "manhattan_ap", "value": 84.72928897427576}, {"type": "manhattan_f1", "value": 81.32084770823064}, {"type": "manhattan_precision", "value": 76.24768946395564}, {"type": "manhattan_recall", "value": 87.11721224920802}, {"type": "max_accuracy", "value": 79.64266377910124}, {"type": "max_ap", "value": 84.7851404063692}, {"type": "max_f1", "value": 81.32084770823064}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB OnlineShopping", "type": "C-MTEB/OnlineShopping-classification", "config": "default", "split": "test", "revision": "e610f2ebd179a8fda30ae534c3878750a96db120"}, "metrics": [{"type": "accuracy", "value": 94.3}, {"type": "ap", "value": 92.8664032274438}, {"type": "f1", "value": 94.29311102997727}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB PAWSX", "type": "C-MTEB/PAWSX", "config": "default", "split": "test", "revision": "9c6a90e430ac22b5779fb019a23e820b11a8b5e1"}, "metrics": [{"type": "cos_sim_pearson", "value": 48.51392279882909}, {"type": "cos_sim_spearman", "value": 54.06338895994974}, {"type": "euclidean_pearson", "value": 52.58480559573412}, {"type": "euclidean_spearman", "value": 54.06417276612201}, {"type": "manhattan_pearson", "value": 52.69525121721343}, {"type": "manhattan_spearman", "value": 54.048147455389675}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB QBQTC", "type": "C-MTEB/QBQTC", "config": "default", "split": "test", "revision": "790b0510dc52b1553e8c49f3d2afb48c0e5c48b7"}, "metrics": [{"type": "cos_sim_pearson", "value": 29.728387290757325}, {"type": "cos_sim_spearman", "value": 31.366121633635284}, {"type": "euclidean_pearson", "value": 29.14588368552961}, {"type": "euclidean_spearman", "value": 31.36764411112844}, {"type": "manhattan_pearson", "value": 29.63517350523121}, {"type": "manhattan_spearman", "value": 31.94157020583762}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh)", "type": "mteb/sts22-crosslingual-sts", "config": "zh", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 63.64868296271406}, {"type": "cos_sim_spearman", "value": 66.12800618164744}, {"type": "euclidean_pearson", "value": 63.21405767340238}, {"type": "euclidean_spearman", "value": 66.12786567790748}, {"type": "manhattan_pearson", "value": 64.04300276525848}, {"type": "manhattan_spearman", "value": 66.5066857145652}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSB", "type": "C-MTEB/STSB", "config": "default", "split": "test", "revision": "0cde68302b3541bb8b3c340dc0644b0b745b3dc0"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.2302623912794}, {"type": "cos_sim_spearman", "value": 81.16833673266562}, {"type": "euclidean_pearson", "value": 79.47647843876024}, {"type": "euclidean_spearman", "value": 81.16944349524972}, {"type": "manhattan_pearson", 
"value": 79.84947238492208}, {"type": "manhattan_spearman", "value": 81.64626599410026}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB T2Reranking", "type": "C-MTEB/T2Reranking", "config": "default", "split": "dev", "revision": "76631901a18387f85eaa53e5450019b87ad58ef9"}, "metrics": [{"type": "map", "value": 67.80129586475687}, {"type": "mrr", "value": 77.77402311635554}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB T2Retrieval", "type": "C-MTEB/T2Retrieval", "config": "default", "split": "dev", "revision": "8731a845f1bf500a4f111cf1070785c793d10e64"}, "metrics": [{"type": "map_at_1", "value": 28.666999999999998}, {"type": "map_at_10", "value": 81.063}, {"type": "map_at_100", "value": 84.504}, {"type": "map_at_1000", "value": 84.552}, {"type": "map_at_3", "value": 56.897}, {"type": "map_at_5", "value": 70.073}, {"type": "mrr_at_1", "value": 92.087}, {"type": "mrr_at_10", "value": 94.132}, {"type": "mrr_at_100", "value": 94.19800000000001}, {"type": "mrr_at_1000", "value": 94.19999999999999}, {"type": "mrr_at_3", "value": 93.78999999999999}, {"type": "mrr_at_5", "value": 94.002}, {"type": "ndcg_at_1", "value": 92.087}, {"type": "ndcg_at_10", "value": 87.734}, {"type": "ndcg_at_100", "value": 90.736}, {"type": "ndcg_at_1000", "value": 91.184}, {"type": "ndcg_at_3", "value": 88.78}, {"type": "ndcg_at_5", "value": 87.676}, {"type": "precision_at_1", "value": 92.087}, {"type": "precision_at_10", "value": 43.46}, {"type": "precision_at_100", "value": 5.07}, {"type": "precision_at_1000", "value": 0.518}, {"type": "precision_at_3", "value": 77.49000000000001}, {"type": "precision_at_5", "value": 65.194}, {"type": "recall_at_1", "value": 28.666999999999998}, {"type": "recall_at_10", "value": 86.632}, {"type": "recall_at_100", "value": 96.646}, {"type": "recall_at_1000", "value": 98.917}, {"type": "recall_at_3", "value": 58.333999999999996}, {"type": "recall_at_5", "value": 72.974}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TNews", "type": "C-MTEB/TNews-classification", "config": "default", "split": "validation", "revision": "317f262bf1e6126357bbe89e875451e4b0938fe4"}, "metrics": [{"type": "accuracy", "value": 52.971999999999994}, {"type": "f1", "value": 50.2898280984929}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ThuNewsClusteringP2P", "type": "C-MTEB/ThuNewsClusteringP2P", "config": "default", "split": "test", "revision": "5798586b105c0434e4f0fe5e767abe619442cf93"}, "metrics": [{"type": "v_measure", "value": 86.0797948663824}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ThuNewsClusteringS2S", "type": "C-MTEB/ThuNewsClusteringS2S", "config": "default", "split": "test", "revision": "8a8b2caeda43f39e13c4bc5bea0f8a667896e10d"}, "metrics": [{"type": "v_measure", "value": 85.10759092255017}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB VideoRetrieval", "type": "C-MTEB/VideoRetrieval", "config": "default", "split": "dev", "revision": "58c2597a5943a2ba48f4668c3b90d796283c5639"}, "metrics": [{"type": "map_at_1", "value": 65.60000000000001}, {"type": "map_at_10", "value": 74.773}, {"type": "map_at_100", "value": 75.128}, {"type": "map_at_1000", "value": 75.136}, {"type": "map_at_3", "value": 73.05}, {"type": "map_at_5", "value": 74.13499999999999}, {"type": "mrr_at_1", "value": 65.60000000000001}, {"type": "mrr_at_10", "value": 74.773}, {"type": "mrr_at_100", "value": 75.128}, {"type": "mrr_at_1000", "value": 75.136}, {"type": "mrr_at_3", "value": 73.05}, {"type": "mrr_at_5", "value": 74.13499999999999}, 
{"type": "ndcg_at_1", "value": 65.60000000000001}, {"type": "ndcg_at_10", "value": 78.84299999999999}, {"type": "ndcg_at_100", "value": 80.40899999999999}, {"type": "ndcg_at_1000", "value": 80.57}, {"type": "ndcg_at_3", "value": 75.40599999999999}, {"type": "ndcg_at_5", "value": 77.351}, {"type": "precision_at_1", "value": 65.60000000000001}, {"type": "precision_at_10", "value": 9.139999999999999}, {"type": "precision_at_100", "value": 0.984}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 27.400000000000002}, {"type": "precision_at_5", "value": 17.380000000000003}, {"type": "recall_at_1", "value": 65.60000000000001}, {"type": "recall_at_10", "value": 91.4}, {"type": "recall_at_100", "value": 98.4}, {"type": "recall_at_1000", "value": 99.6}, {"type": "recall_at_3", "value": 82.19999999999999}, {"type": "recall_at_5", "value": 86.9}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Waimai", "type": "C-MTEB/waimai-classification", "config": "default", "split": "test", "revision": "339287def212450dcaa9df8c22bf93e9980c7023"}, "metrics": [{"type": "accuracy", "value": 89.47}, {"type": "ap", "value": 75.59561751845389}, {"type": "f1", "value": 87.95207751382563}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB AlloProfClusteringP2P", "type": "lyon-nlp/alloprof", "config": "default", "split": "test", "revision": "392ba3f5bcc8c51f578786c1fc3dae648662cb9b"}, "metrics": [{"type": "v_measure", "value": 76.05592323841036}, {"type": "v_measure", "value": 64.51718058866508}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AlloprofReranking", "type": "lyon-nlp/mteb-fr-reranking-alloprof-s2p", "config": "default", "split": "test", "revision": "666fdacebe0291776e86f29345663dfaf80a0db9"}, "metrics": [{"type": "map", "value": 73.08278490943373}, {"type": "mrr", "value": 74.66561454570449}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB AlloprofRetrieval", "type": "lyon-nlp/alloprof", "config": "default", "split": "test", "revision": "392ba3f5bcc8c51f578786c1fc3dae648662cb9b"}, "metrics": [{"type": "map_at_1", "value": 38.912}, {"type": "map_at_10", "value": 52.437999999999995}, {"type": "map_at_100", "value": 53.38}, {"type": "map_at_1000", "value": 53.427}, {"type": "map_at_3", "value": 48.879}, {"type": "map_at_5", "value": 50.934000000000005}, {"type": "mrr_at_1", "value": 44.085}, {"type": "mrr_at_10", "value": 55.337}, {"type": "mrr_at_100", "value": 56.016999999999996}, {"type": "mrr_at_1000", "value": 56.043}, {"type": "mrr_at_3", "value": 52.55499999999999}, {"type": "mrr_at_5", "value": 54.20399999999999}, {"type": "ndcg_at_1", "value": 44.085}, {"type": "ndcg_at_10", "value": 58.876}, {"type": "ndcg_at_100", "value": 62.714000000000006}, {"type": "ndcg_at_1000", "value": 63.721000000000004}, {"type": "ndcg_at_3", "value": 52.444}, {"type": "ndcg_at_5", "value": 55.692}, {"type": "precision_at_1", "value": 44.085}, {"type": "precision_at_10", "value": 9.21}, {"type": "precision_at_100", "value": 1.164}, {"type": "precision_at_1000", "value": 0.128}, {"type": "precision_at_3", "value": 23.043}, {"type": "precision_at_5", "value": 15.898000000000001}, {"type": "recall_at_1", "value": 38.912}, {"type": "recall_at_10", "value": 75.577}, {"type": "recall_at_100", "value": 92.038}, {"type": "recall_at_1000", "value": 99.325}, {"type": "recall_at_3", "value": 58.592}, {"type": "recall_at_5", "value": 66.235}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (fr)", "type": 
"mteb/amazon_reviews_multi", "config": "fr", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 55.532000000000004}, {"type": "f1", "value": 52.5783943471605}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB BSARDRetrieval", "type": "maastrichtlawtech/bsard", "config": "default", "split": "test", "revision": "5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59"}, "metrics": [{"type": "map_at_1", "value": 8.108}, {"type": "map_at_10", "value": 14.710999999999999}, {"type": "map_at_100", "value": 15.891}, {"type": "map_at_1000", "value": 15.983}, {"type": "map_at_3", "value": 12.237}, {"type": "map_at_5", "value": 13.679}, {"type": "mrr_at_1", "value": 8.108}, {"type": "mrr_at_10", "value": 14.710999999999999}, {"type": "mrr_at_100", "value": 15.891}, {"type": "mrr_at_1000", "value": 15.983}, {"type": "mrr_at_3", "value": 12.237}, {"type": "mrr_at_5", "value": 13.679}, {"type": "ndcg_at_1", "value": 8.108}, {"type": "ndcg_at_10", "value": 18.796}, {"type": "ndcg_at_100", "value": 25.098}, {"type": "ndcg_at_1000", "value": 27.951999999999998}, {"type": "ndcg_at_3", "value": 13.712}, {"type": "ndcg_at_5", "value": 16.309}, {"type": "precision_at_1", "value": 8.108}, {"type": "precision_at_10", "value": 3.198}, {"type": "precision_at_100", "value": 0.626}, {"type": "precision_at_1000", "value": 0.086}, {"type": "precision_at_3", "value": 6.006}, {"type": "precision_at_5", "value": 4.865}, {"type": "recall_at_1", "value": 8.108}, {"type": "recall_at_10", "value": 31.982}, {"type": "recall_at_100", "value": 62.613}, {"type": "recall_at_1000", "value": 86.036}, {"type": "recall_at_3", "value": 18.018}, {"type": "recall_at_5", "value": 24.324}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB HALClusteringS2S", "type": "lyon-nlp/clustering-hal-s2s", "config": "default", "split": "test", "revision": "e06ebbbb123f8144bef1a5d18796f3dec9ae2915"}, "metrics": [{"type": "v_measure", "value": 30.833269778867116}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MLSUMClusteringP2P", "type": "mlsum", "config": "default", "split": "test", "revision": "b5d54f8f3b61ae17845046286940f03c6bc79bc7"}, "metrics": [{"type": "v_measure", "value": 50.0281928004713}, {"type": "v_measure", "value": 43.699961510636534}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (fr)", "type": "mteb/mtop_domain", "config": "fr", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 96.68963357344191}, {"type": "f1", "value": 96.45175170820961}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (fr)", "type": "mteb/mtop_intent", "config": "fr", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 87.46946445349202}, {"type": "f1", "value": 65.79860440988624}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MasakhaNEWSClassification (fra)", "type": "masakhane/masakhanews", "config": "fra", "split": "test", "revision": "8ccc72e69e65f40c70e117d8b3c08306bb788b60"}, "metrics": [{"type": "accuracy", "value": 82.60663507109005}, {"type": "f1", "value": 77.20462646604777}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MasakhaNEWSClusteringP2P (fra)", "type": "masakhane/masakhanews", "config": "fra", "split": "test", "revision": "8ccc72e69e65f40c70e117d8b3c08306bb788b60"}, "metrics": [{"type": "v_measure", "value": 
60.19311264967803}, {"type": "v_measure", "value": 63.6235764409785}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fr)", "type": "mteb/amazon_massive_intent", "config": "fr", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 81.65097511768661}, {"type": "f1", "value": 78.77796091490924}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fr)", "type": "mteb/amazon_massive_scenario", "config": "fr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 86.64425016812373}, {"type": "f1", "value": 85.4912728670017}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MintakaRetrieval (fr)", "type": "jinaai/mintakaqa", "config": "fr", "split": "test", "revision": "efa78cc2f74bbcd21eff2261f9e13aebe40b814e"}, "metrics": [{"type": "map_at_1", "value": 35.913000000000004}, {"type": "map_at_10", "value": 48.147}, {"type": "map_at_100", "value": 48.91}, {"type": "map_at_1000", "value": 48.949}, {"type": "map_at_3", "value": 45.269999999999996}, {"type": "map_at_5", "value": 47.115}, {"type": "mrr_at_1", "value": 35.913000000000004}, {"type": "mrr_at_10", "value": 48.147}, {"type": "mrr_at_100", "value": 48.91}, {"type": "mrr_at_1000", "value": 48.949}, {"type": "mrr_at_3", "value": 45.269999999999996}, {"type": "mrr_at_5", "value": 47.115}, {"type": "ndcg_at_1", "value": 35.913000000000004}, {"type": "ndcg_at_10", "value": 54.03}, {"type": "ndcg_at_100", "value": 57.839}, {"type": "ndcg_at_1000", "value": 58.925000000000004}, {"type": "ndcg_at_3", "value": 48.217999999999996}, {"type": "ndcg_at_5", "value": 51.56699999999999}, {"type": "precision_at_1", "value": 35.913000000000004}, {"type": "precision_at_10", "value": 7.244000000000001}, {"type": "precision_at_100", "value": 0.9039999999999999}, {"type": "precision_at_1000", "value": 0.099}, {"type": "precision_at_3", "value": 18.905}, {"type": "precision_at_5", "value": 12.981000000000002}, {"type": "recall_at_1", "value": 35.913000000000004}, {"type": "recall_at_10", "value": 72.441}, {"type": "recall_at_100", "value": 90.41799999999999}, {"type": "recall_at_1000", "value": 99.099}, {"type": "recall_at_3", "value": 56.716}, {"type": "recall_at_5", "value": 64.90599999999999}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB OpusparcusPC (fr)", "type": "GEM/opusparcus", "config": "fr", "split": "test", "revision": "9e9b1f8ef51616073f47f306f7f47dd91663f86a"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.90069513406156}, {"type": "cos_sim_ap", "value": 100.0}, {"type": "cos_sim_f1", "value": 99.95032290114257}, {"type": "cos_sim_precision", "value": 100.0}, {"type": "cos_sim_recall", "value": 99.90069513406156}, {"type": "dot_accuracy", "value": 99.90069513406156}, {"type": "dot_ap", "value": 100.0}, {"type": "dot_f1", "value": 99.95032290114257}, {"type": "dot_precision", "value": 100.0}, {"type": "dot_recall", "value": 99.90069513406156}, {"type": "euclidean_accuracy", "value": 99.90069513406156}, {"type": "euclidean_ap", "value": 100.0}, {"type": "euclidean_f1", "value": 99.95032290114257}, {"type": "euclidean_precision", "value": 100.0}, {"type": "euclidean_recall", "value": 99.90069513406156}, {"type": "manhattan_accuracy", "value": 99.90069513406156}, {"type": "manhattan_ap", "value": 100.0}, {"type": "manhattan_f1", "value": 99.95032290114257}, {"type": "manhattan_precision", "value": 
100.0}, {"type": "manhattan_recall", "value": 99.90069513406156}, {"type": "max_accuracy", "value": 99.90069513406156}, {"type": "max_ap", "value": 100.0}, {"type": "max_f1", "value": 99.95032290114257}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB PawsX (fr)", "type": "paws-x", "config": "fr", "split": "test", "revision": "8a04d940a42cd40658986fdd8e3da561533a3646"}, "metrics": [{"type": "cos_sim_accuracy", "value": 75.25}, {"type": "cos_sim_ap", "value": 80.86376001270014}, {"type": "cos_sim_f1", "value": 73.65945437441204}, {"type": "cos_sim_precision", "value": 64.02289452166802}, {"type": "cos_sim_recall", "value": 86.71096345514951}, {"type": "dot_accuracy", "value": 75.25}, {"type": "dot_ap", "value": 80.93686107633002}, {"type": "dot_f1", "value": 73.65945437441204}, {"type": "dot_precision", "value": 64.02289452166802}, {"type": "dot_recall", "value": 86.71096345514951}, {"type": "euclidean_accuracy", "value": 75.25}, {"type": "euclidean_ap", "value": 80.86379136218862}, {"type": "euclidean_f1", "value": 73.65945437441204}, {"type": "euclidean_precision", "value": 64.02289452166802}, {"type": "euclidean_recall", "value": 86.71096345514951}, {"type": "manhattan_accuracy", "value": 75.3}, {"type": "manhattan_ap", "value": 80.87826606097734}, {"type": "manhattan_f1", "value": 73.68421052631581}, {"type": "manhattan_precision", "value": 64.0}, {"type": "manhattan_recall", "value": 86.82170542635659}, {"type": "max_accuracy", "value": 75.3}, {"type": "max_ap", "value": 80.93686107633002}, {"type": "max_f1", "value": 73.68421052631581}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICKFr", "type": "Lajavaness/SICK-fr", "config": "default", "split": "test", "revision": "e077ab4cf4774a1e36d86d593b150422fafd8e8a"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.42349425981143}, {"type": "cos_sim_spearman", "value": 78.90454327031226}, {"type": "euclidean_pearson", "value": 78.39086497435166}, {"type": "euclidean_spearman", "value": 78.9046133980509}, {"type": "manhattan_pearson", "value": 78.63743094286502}, {"type": "manhattan_spearman", "value": 79.12136348449269}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr)", "type": "mteb/sts22-crosslingual-sts", "config": "fr", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.452697919749}, {"type": "cos_sim_spearman", "value": 82.58116836039301}, {"type": "euclidean_pearson", "value": 81.04038478932786}, {"type": "euclidean_spearman", "value": 82.58116836039301}, {"type": "manhattan_pearson", "value": 81.37075396187771}, {"type": "manhattan_spearman", "value": 82.73678231355368}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (fr)", "type": "stsb_multi_mt", "config": "fr", "split": "test", "revision": "93d57ef91790589e3ce9c365164337a8a78b7632"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.7419764013806}, {"type": "cos_sim_spearman", "value": 85.46085808849622}, {"type": "euclidean_pearson", "value": 83.70449639870063}, {"type": "euclidean_spearman", "value": 85.46159013076233}, {"type": "manhattan_pearson", "value": 83.95259510313929}, {"type": "manhattan_spearman", "value": 85.8029724659458}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEvalFr", "type": "lyon-nlp/summarization-summeval-fr-p2p", "config": "default", "split": "test", "revision": "b385812de6a9577b6f4d0f88c6a6e35395a94054"}, "metrics": [{"type": "cos_sim_pearson", "value": 
32.61063271753325}, {"type": "cos_sim_spearman", "value": 31.454589417353603}, {"type": "dot_pearson", "value": 32.6106288643431}, {"type": "dot_spearman", "value": 31.454589417353603}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SyntecReranking", "type": "lyon-nlp/mteb-fr-reranking-syntec-s2p", "config": "default", "split": "test", "revision": "b205c5084a0934ce8af14338bf03feb19499c84d"}, "metrics": [{"type": "map", "value": 84.31666666666666}, {"type": "mrr", "value": 84.31666666666666}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SyntecRetrieval", "type": "lyon-nlp/mteb-fr-retrieval-syntec-s2p", "config": "default", "split": "test", "revision": "77f7e271bf4a92b24fce5119f3486b583ca016ff"}, "metrics": [{"type": "map_at_1", "value": 63.0}, {"type": "map_at_10", "value": 73.471}, {"type": "map_at_100", "value": 73.87}, {"type": "map_at_1000", "value": 73.87}, {"type": "map_at_3", "value": 70.5}, {"type": "map_at_5", "value": 73.05}, {"type": "mrr_at_1", "value": 63.0}, {"type": "mrr_at_10", "value": 73.471}, {"type": "mrr_at_100", "value": 73.87}, {"type": "mrr_at_1000", "value": 73.87}, {"type": "mrr_at_3", "value": 70.5}, {"type": "mrr_at_5", "value": 73.05}, {"type": "ndcg_at_1", "value": 63.0}, {"type": "ndcg_at_10", "value": 78.255}, {"type": "ndcg_at_100", "value": 79.88}, {"type": "ndcg_at_1000", "value": 79.88}, {"type": "ndcg_at_3", "value": 72.702}, {"type": "ndcg_at_5", "value": 77.264}, {"type": "precision_at_1", "value": 63.0}, {"type": "precision_at_10", "value": 9.3}, {"type": "precision_at_100", "value": 1.0}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 26.333000000000002}, {"type": "precision_at_5", "value": 18.0}, {"type": "recall_at_1", "value": 63.0}, {"type": "recall_at_10", "value": 93.0}, {"type": "recall_at_100", "value": 100.0}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 79.0}, {"type": "recall_at_5", "value": 90.0}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB XPQARetrieval (fr)", "type": "jinaai/xpqa", "config": "fr", "split": "test", "revision": "c99d599f0a6ab9b85b065da6f9d94f9cf731679f"}, "metrics": [{"type": "map_at_1", "value": 40.338}, {"type": "map_at_10", "value": 61.927}, {"type": "map_at_100", "value": 63.361999999999995}, {"type": "map_at_1000", "value": 63.405}, {"type": "map_at_3", "value": 55.479}, {"type": "map_at_5", "value": 59.732}, {"type": "mrr_at_1", "value": 63.551}, {"type": "mrr_at_10", "value": 71.006}, {"type": "mrr_at_100", "value": 71.501}, {"type": "mrr_at_1000", "value": 71.509}, {"type": "mrr_at_3", "value": 69.07}, {"type": "mrr_at_5", "value": 70.165}, {"type": "ndcg_at_1", "value": 63.551}, {"type": "ndcg_at_10", "value": 68.297}, {"type": "ndcg_at_100", "value": 73.13199999999999}, {"type": "ndcg_at_1000", "value": 73.751}, {"type": "ndcg_at_3", "value": 62.999}, {"type": "ndcg_at_5", "value": 64.89}, {"type": "precision_at_1", "value": 63.551}, {"type": "precision_at_10", "value": 15.661}, {"type": "precision_at_100", "value": 1.9789999999999999}, {"type": "precision_at_1000", "value": 0.207}, {"type": "precision_at_3", "value": 38.273}, {"type": "precision_at_5", "value": 27.61}, {"type": "recall_at_1", "value": 40.338}, {"type": "recall_at_10", "value": 77.267}, {"type": "recall_at_100", "value": 95.892}, {"type": "recall_at_1000", "value": 99.75500000000001}, {"type": "recall_at_3", "value": 60.36}, {"type": "recall_at_5", "value": 68.825}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB 
8TagsClustering", "type": "PL-MTEB/8tags-clustering", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "v_measure", "value": 51.36126303874126}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AllegroReviews", "type": "PL-MTEB/allegro-reviews", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "accuracy", "value": 67.13717693836979}, {"type": "f1", "value": 57.27609848003782}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna-PL", "type": "clarin-knext/arguana-pl", "config": "default", "split": "test", "revision": "63fc86750af76253e8c760fc9e534bbf24d260a2"}, "metrics": [{"type": "map_at_1", "value": 35.276999999999994}, {"type": "map_at_10", "value": 51.086}, {"type": "map_at_100", "value": 51.788000000000004}, {"type": "map_at_1000", "value": 51.791}, {"type": "map_at_3", "value": 46.147}, {"type": "map_at_5", "value": 49.078}, {"type": "mrr_at_1", "value": 35.917}, {"type": "mrr_at_10", "value": 51.315999999999995}, {"type": "mrr_at_100", "value": 52.018}, {"type": "mrr_at_1000", "value": 52.022}, {"type": "mrr_at_3", "value": 46.349000000000004}, {"type": "mrr_at_5", "value": 49.297000000000004}, {"type": "ndcg_at_1", "value": 35.276999999999994}, {"type": "ndcg_at_10", "value": 59.870999999999995}, {"type": "ndcg_at_100", "value": 62.590999999999994}, {"type": "ndcg_at_1000", "value": 62.661}, {"type": "ndcg_at_3", "value": 49.745}, {"type": "ndcg_at_5", "value": 55.067}, {"type": "precision_at_1", "value": 35.276999999999994}, {"type": "precision_at_10", "value": 8.791}, {"type": "precision_at_100", "value": 0.991}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 20.057}, {"type": "precision_at_5", "value": 14.637}, {"type": "recall_at_1", "value": 35.276999999999994}, {"type": "recall_at_10", "value": 87.909}, {"type": "recall_at_100", "value": 99.14699999999999}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 60.171}, {"type": "recall_at_5", "value": 73.18599999999999}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB CBD", "type": "PL-MTEB/cbd", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "accuracy", "value": 78.03000000000002}, {"type": "ap", "value": 29.12548553897622}, {"type": "f1", "value": 66.54857118886073}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB CDSC-E", "type": "PL-MTEB/cdsce-pairclassification", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.0}, {"type": "cos_sim_ap", "value": 76.75437826834582}, {"type": "cos_sim_f1", "value": 66.4850136239782}, {"type": "cos_sim_precision", "value": 68.92655367231639}, {"type": "cos_sim_recall", "value": 64.21052631578948}, {"type": "dot_accuracy", "value": 89.0}, {"type": "dot_ap", "value": 76.75437826834582}, {"type": "dot_f1", "value": 66.4850136239782}, {"type": "dot_precision", "value": 68.92655367231639}, {"type": "dot_recall", "value": 64.21052631578948}, {"type": "euclidean_accuracy", "value": 89.0}, {"type": "euclidean_ap", "value": 76.75437826834582}, {"type": "euclidean_f1", "value": 66.4850136239782}, {"type": "euclidean_precision", "value": 68.92655367231639}, {"type": "euclidean_recall", "value": 64.21052631578948}, {"type": "manhattan_accuracy", "value": 89.0}, {"type": "manhattan_ap", "value": 76.66074220647083}, {"type": "manhattan_f1", "value": 66.47058823529412}, {"type": "manhattan_precision", "value": 
75.33333333333333}, {"type": "manhattan_recall", "value": 59.473684210526315}, {"type": "max_accuracy", "value": 89.0}, {"type": "max_ap", "value": 76.75437826834582}, {"type": "max_f1", "value": 66.4850136239782}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB CDSC-R", "type": "PL-MTEB/cdscr-sts", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "cos_sim_pearson", "value": 93.12903172428328}, {"type": "cos_sim_spearman", "value": 92.66381487060741}, {"type": "euclidean_pearson", "value": 90.37278396708922}, {"type": "euclidean_spearman", "value": 92.66381487060741}, {"type": "manhattan_pearson", "value": 90.32503296540962}, {"type": "manhattan_spearman", "value": 92.6902938354313}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia-PL", "type": "clarin-knext/dbpedia-pl", "config": "default", "split": "test", "revision": "76afe41d9af165cc40999fcaa92312b8b012064a"}, "metrics": [{"type": "map_at_1", "value": 8.83}, {"type": "map_at_10", "value": 18.326}, {"type": "map_at_100", "value": 26.496}, {"type": "map_at_1000", "value": 28.455000000000002}, {"type": "map_at_3", "value": 12.933}, {"type": "map_at_5", "value": 15.168000000000001}, {"type": "mrr_at_1", "value": 66.0}, {"type": "mrr_at_10", "value": 72.76700000000001}, {"type": "mrr_at_100", "value": 73.203}, {"type": "mrr_at_1000", "value": 73.219}, {"type": "mrr_at_3", "value": 71.458}, {"type": "mrr_at_5", "value": 72.246}, {"type": "ndcg_at_1", "value": 55.375}, {"type": "ndcg_at_10", "value": 41.3}, {"type": "ndcg_at_100", "value": 45.891}, {"type": "ndcg_at_1000", "value": 52.905}, {"type": "ndcg_at_3", "value": 46.472}, {"type": "ndcg_at_5", "value": 43.734}, {"type": "precision_at_1", "value": 66.0}, {"type": "precision_at_10", "value": 33.074999999999996}, {"type": "precision_at_100", "value": 11.094999999999999}, {"type": "precision_at_1000", "value": 2.374}, {"type": "precision_at_3", "value": 48.583}, {"type": "precision_at_5", "value": 42.0}, {"type": "recall_at_1", "value": 8.83}, {"type": "recall_at_10", "value": 22.587}, {"type": "recall_at_100", "value": 50.61600000000001}, {"type": "recall_at_1000", "value": 73.559}, {"type": "recall_at_3", "value": 13.688}, {"type": "recall_at_5", "value": 16.855}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA-PL", "type": "clarin-knext/fiqa-pl", "config": "default", "split": "test", "revision": "2e535829717f8bf9dc829b7f911cc5bbd4e6608e"}, "metrics": [{"type": "map_at_1", "value": 20.587}, {"type": "map_at_10", "value": 33.095}, {"type": "map_at_100", "value": 35.24}, {"type": "map_at_1000", "value": 35.429}, {"type": "map_at_3", "value": 28.626}, {"type": "map_at_5", "value": 31.136999999999997}, {"type": "mrr_at_1", "value": 40.586}, {"type": "mrr_at_10", "value": 49.033}, {"type": "mrr_at_100", "value": 49.952999999999996}, {"type": "mrr_at_1000", "value": 49.992}, {"type": "mrr_at_3", "value": 46.553}, {"type": "mrr_at_5", "value": 48.035}, {"type": "ndcg_at_1", "value": 40.586}, {"type": "ndcg_at_10", "value": 41.046}, {"type": "ndcg_at_100", "value": 48.586}, {"type": "ndcg_at_1000", "value": 51.634}, {"type": "ndcg_at_3", "value": 36.773}, {"type": "ndcg_at_5", "value": 38.389}, {"type": "precision_at_1", "value": 40.586}, {"type": "precision_at_10", "value": 11.466}, {"type": "precision_at_100", "value": 1.909}, {"type": "precision_at_1000", "value": 0.245}, {"type": "precision_at_3", "value": 24.434}, {"type": "precision_at_5", "value": 18.426000000000002}, {"type": "recall_at_1", "value": 20.587}, 
{"type": "recall_at_10", "value": 47.986000000000004}, {"type": "recall_at_100", "value": 75.761}, {"type": "recall_at_1000", "value": 94.065}, {"type": "recall_at_3", "value": 33.339}, {"type": "recall_at_5", "value": 39.765}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA-PL", "type": "clarin-knext/hotpotqa-pl", "config": "default", "split": "test", "revision": "a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907"}, "metrics": [{"type": "map_at_1", "value": 40.878}, {"type": "map_at_10", "value": 58.775999999999996}, {"type": "map_at_100", "value": 59.632}, {"type": "map_at_1000", "value": 59.707}, {"type": "map_at_3", "value": 56.074}, {"type": "map_at_5", "value": 57.629}, {"type": "mrr_at_1", "value": 81.756}, {"type": "mrr_at_10", "value": 86.117}, {"type": "mrr_at_100", "value": 86.299}, {"type": "mrr_at_1000", "value": 86.30600000000001}, {"type": "mrr_at_3", "value": 85.345}, {"type": "mrr_at_5", "value": 85.832}, {"type": "ndcg_at_1", "value": 81.756}, {"type": "ndcg_at_10", "value": 67.608}, {"type": "ndcg_at_100", "value": 70.575}, {"type": "ndcg_at_1000", "value": 71.99600000000001}, {"type": "ndcg_at_3", "value": 63.723}, {"type": "ndcg_at_5", "value": 65.70700000000001}, {"type": "precision_at_1", "value": 81.756}, {"type": "precision_at_10", "value": 13.619}, {"type": "precision_at_100", "value": 1.5939999999999999}, {"type": "precision_at_1000", "value": 0.178}, {"type": "precision_at_3", "value": 39.604}, {"type": "precision_at_5", "value": 25.332}, {"type": "recall_at_1", "value": 40.878}, {"type": "recall_at_10", "value": 68.096}, {"type": "recall_at_100", "value": 79.696}, {"type": "recall_at_1000", "value": 89.082}, {"type": "recall_at_3", "value": 59.406000000000006}, {"type": "recall_at_5", "value": 63.329}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO-PL", "type": "clarin-knext/msmarco-pl", "config": "default", "split": "test", "revision": "8634c07806d5cce3a6138e260e59b81760a0a640"}, "metrics": [{"type": "map_at_1", "value": 2.1839999999999997}, {"type": "map_at_10", "value": 11.346}, {"type": "map_at_100", "value": 30.325000000000003}, {"type": "map_at_1000", "value": 37.806}, {"type": "map_at_3", "value": 4.842}, {"type": "map_at_5", "value": 6.891}, {"type": "mrr_at_1", "value": 86.047}, {"type": "mrr_at_10", "value": 89.14699999999999}, {"type": "mrr_at_100", "value": 89.46600000000001}, {"type": "mrr_at_1000", "value": 89.46600000000001}, {"type": "mrr_at_3", "value": 89.14699999999999}, {"type": "mrr_at_5", "value": 89.14699999999999}, {"type": "ndcg_at_1", "value": 67.829}, {"type": "ndcg_at_10", "value": 62.222}, {"type": "ndcg_at_100", "value": 55.337}, {"type": "ndcg_at_1000", "value": 64.076}, {"type": "ndcg_at_3", "value": 68.12700000000001}, {"type": "ndcg_at_5", "value": 64.987}, {"type": "precision_at_1", "value": 86.047}, {"type": "precision_at_10", "value": 69.535}, {"type": "precision_at_100", "value": 32.93}, {"type": "precision_at_1000", "value": 6.6049999999999995}, {"type": "precision_at_3", "value": 79.845}, {"type": "precision_at_5", "value": 75.349}, {"type": "recall_at_1", "value": 2.1839999999999997}, {"type": "recall_at_10", "value": 12.866}, {"type": "recall_at_100", "value": 43.505}, {"type": "recall_at_1000", "value": 72.366}, {"type": "recall_at_3", "value": 4.947}, {"type": "recall_at_5", "value": 7.192}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pl)", "type": "mteb/amazon_massive_intent", "config": "pl", "split": "test", "revision": 
"31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 80.75319435104238}, {"type": "f1", "value": 77.58961444860606}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pl)", "type": "mteb/amazon_massive_scenario", "config": "pl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 85.54472091459313}, {"type": "f1", "value": 84.29498563572106}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus-PL", "type": "clarin-knext/nfcorpus-pl", "config": "default", "split": "test", "revision": "9a6f9567fda928260afed2de480d79c98bf0bec0"}, "metrics": [{"type": "map_at_1", "value": 4.367}, {"type": "map_at_10", "value": 10.38}, {"type": "map_at_100", "value": 13.516}, {"type": "map_at_1000", "value": 14.982000000000001}, {"type": "map_at_3", "value": 7.367}, {"type": "map_at_5", "value": 8.59}, {"type": "mrr_at_1", "value": 41.486000000000004}, {"type": "mrr_at_10", "value": 48.886}, {"type": "mrr_at_100", "value": 49.657000000000004}, {"type": "mrr_at_1000", "value": 49.713}, {"type": "mrr_at_3", "value": 46.904}, {"type": "mrr_at_5", "value": 48.065000000000005}, {"type": "ndcg_at_1", "value": 40.402}, {"type": "ndcg_at_10", "value": 30.885}, {"type": "ndcg_at_100", "value": 28.393}, {"type": "ndcg_at_1000", "value": 37.428}, {"type": "ndcg_at_3", "value": 35.394999999999996}, {"type": "ndcg_at_5", "value": 33.391999999999996}, {"type": "precision_at_1", "value": 41.486000000000004}, {"type": "precision_at_10", "value": 23.437}, {"type": "precision_at_100", "value": 7.638}, {"type": "precision_at_1000", "value": 2.0389999999999997}, {"type": "precision_at_3", "value": 32.817}, {"type": "precision_at_5", "value": 28.915999999999997}, {"type": "recall_at_1", "value": 4.367}, {"type": "recall_at_10", "value": 14.655000000000001}, {"type": "recall_at_100", "value": 29.665999999999997}, {"type": "recall_at_1000", "value": 62.073}, {"type": "recall_at_3", "value": 8.51}, {"type": "recall_at_5", "value": 10.689}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ-PL", "type": "clarin-knext/nq-pl", "config": "default", "split": "test", "revision": "f171245712cf85dd4700b06bef18001578d0ca8d"}, "metrics": [{"type": "map_at_1", "value": 28.616000000000003}, {"type": "map_at_10", "value": 41.626000000000005}, {"type": "map_at_100", "value": 42.689}, {"type": "map_at_1000", "value": 42.733}, {"type": "map_at_3", "value": 37.729}, {"type": "map_at_5", "value": 39.879999999999995}, {"type": "mrr_at_1", "value": 32.068000000000005}, {"type": "mrr_at_10", "value": 44.029}, {"type": "mrr_at_100", "value": 44.87}, {"type": "mrr_at_1000", "value": 44.901}, {"type": "mrr_at_3", "value": 40.687}, {"type": "mrr_at_5", "value": 42.625}, {"type": "ndcg_at_1", "value": 32.068000000000005}, {"type": "ndcg_at_10", "value": 48.449999999999996}, {"type": "ndcg_at_100", "value": 53.13}, {"type": "ndcg_at_1000", "value": 54.186}, {"type": "ndcg_at_3", "value": 40.983999999999995}, {"type": "ndcg_at_5", "value": 44.628}, {"type": "precision_at_1", "value": 32.068000000000005}, {"type": "precision_at_10", "value": 7.9750000000000005}, {"type": "precision_at_100", "value": 1.061}, {"type": "precision_at_1000", "value": 0.116}, {"type": "precision_at_3", "value": 18.404999999999998}, {"type": "precision_at_5", "value": 13.111}, {"type": "recall_at_1", "value": 28.616000000000003}, {"type": "recall_at_10", "value": 66.956}, {"type": "recall_at_100", "value": 87.657}, 
{"type": "recall_at_1000", "value": 95.548}, {"type": "recall_at_3", "value": 47.453}, {"type": "recall_at_5", "value": 55.87800000000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB PAC", "type": "laugustyniak/abusive-clauses-pl", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "accuracy", "value": 69.04141326382856}, {"type": "ap", "value": 77.47589122111044}, {"type": "f1", "value": 66.6332277374775}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB PPC", "type": "PL-MTEB/ppc-pairclassification", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.4}, {"type": "cos_sim_ap", "value": 94.1044939667201}, {"type": "cos_sim_f1", "value": 88.78048780487805}, {"type": "cos_sim_precision", "value": 87.22044728434504}, {"type": "cos_sim_recall", "value": 90.39735099337747}, {"type": "dot_accuracy", "value": 86.4}, {"type": "dot_ap", "value": 94.1044939667201}, {"type": "dot_f1", "value": 88.78048780487805}, {"type": "dot_precision", "value": 87.22044728434504}, {"type": "dot_recall", "value": 90.39735099337747}, {"type": "euclidean_accuracy", "value": 86.4}, {"type": "euclidean_ap", "value": 94.1044939667201}, {"type": "euclidean_f1", "value": 88.78048780487805}, {"type": "euclidean_precision", "value": 87.22044728434504}, {"type": "euclidean_recall", "value": 90.39735099337747}, {"type": "manhattan_accuracy", "value": 86.4}, {"type": "manhattan_ap", "value": 94.11438365697387}, {"type": "manhattan_f1", "value": 88.77968877968877}, {"type": "manhattan_precision", "value": 87.84440842787681}, {"type": "manhattan_recall", "value": 89.73509933774835}, {"type": "max_accuracy", "value": 86.4}, {"type": "max_ap", "value": 94.11438365697387}, {"type": "max_f1", "value": 88.78048780487805}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB PSC", "type": "PL-MTEB/psc-pairclassification", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "cos_sim_accuracy", "value": 97.86641929499072}, {"type": "cos_sim_ap", "value": 99.36904211868182}, {"type": "cos_sim_f1", "value": 96.56203288490283}, {"type": "cos_sim_precision", "value": 94.72140762463343}, {"type": "cos_sim_recall", "value": 98.47560975609755}, {"type": "dot_accuracy", "value": 97.86641929499072}, {"type": "dot_ap", "value": 99.36904211868183}, {"type": "dot_f1", "value": 96.56203288490283}, {"type": "dot_precision", "value": 94.72140762463343}, {"type": "dot_recall", "value": 98.47560975609755}, {"type": "euclidean_accuracy", "value": 97.86641929499072}, {"type": "euclidean_ap", "value": 99.36904211868183}, {"type": "euclidean_f1", "value": 96.56203288490283}, {"type": "euclidean_precision", "value": 94.72140762463343}, {"type": "euclidean_recall", "value": 98.47560975609755}, {"type": "manhattan_accuracy", "value": 98.14471243042672}, {"type": "manhattan_ap", "value": 99.43359540492416}, {"type": "manhattan_f1", "value": 96.98795180722892}, {"type": "manhattan_precision", "value": 95.83333333333334}, {"type": "manhattan_recall", "value": 98.17073170731707}, {"type": "max_accuracy", "value": 98.14471243042672}, {"type": "max_ap", "value": 99.43359540492416}, {"type": "max_f1", "value": 96.98795180722892}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB PolEmo2.0-IN", "type": "PL-MTEB/polemo2_in", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "accuracy", "value": 89.39058171745152}, {"type": "f1", "value": 
86.8552093529568}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB PolEmo2.0-OUT", "type": "PL-MTEB/polemo2_out", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "accuracy", "value": 74.97975708502024}, {"type": "f1", "value": 58.73081628832407}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Quora-PL", "type": "clarin-knext/quora-pl", "config": "default", "split": "test", "revision": "0be27e93455051e531182b85e85e425aba12e9d4"}, "metrics": [{"type": "map_at_1", "value": 64.917}, {"type": "map_at_10", "value": 78.74600000000001}, {"type": "map_at_100", "value": 79.501}, {"type": "map_at_1000", "value": 79.524}, {"type": "map_at_3", "value": 75.549}, {"type": "map_at_5", "value": 77.495}, {"type": "mrr_at_1", "value": 74.9}, {"type": "mrr_at_10", "value": 82.112}, {"type": "mrr_at_100", "value": 82.314}, {"type": "mrr_at_1000", "value": 82.317}, {"type": "mrr_at_3", "value": 80.745}, {"type": "mrr_at_5", "value": 81.607}, {"type": "ndcg_at_1", "value": 74.83999999999999}, {"type": "ndcg_at_10", "value": 83.214}, {"type": "ndcg_at_100", "value": 84.997}, {"type": "ndcg_at_1000", "value": 85.207}, {"type": "ndcg_at_3", "value": 79.547}, {"type": "ndcg_at_5", "value": 81.46600000000001}, {"type": "precision_at_1", "value": 74.83999999999999}, {"type": "precision_at_10", "value": 12.822}, {"type": "precision_at_100", "value": 1.506}, {"type": "precision_at_1000", "value": 0.156}, {"type": "precision_at_3", "value": 34.903}, {"type": "precision_at_5", "value": 23.16}, {"type": "recall_at_1", "value": 64.917}, {"type": "recall_at_10", "value": 92.27199999999999}, {"type": "recall_at_100", "value": 98.715}, {"type": "recall_at_1000", "value": 99.854}, {"type": "recall_at_3", "value": 82.04599999999999}, {"type": "recall_at_5", "value": 87.2}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS-PL", "type": "clarin-knext/scidocs-pl", "config": "default", "split": "test", "revision": "45452b03f05560207ef19149545f168e596c9337"}, "metrics": [{"type": "map_at_1", "value": 3.51}, {"type": "map_at_10", "value": 9.046999999999999}, {"type": "map_at_100", "value": 10.823}, {"type": "map_at_1000", "value": 11.144}, {"type": "map_at_3", "value": 6.257}, {"type": "map_at_5", "value": 7.648000000000001}, {"type": "mrr_at_1", "value": 17.299999999999997}, {"type": "mrr_at_10", "value": 27.419}, {"type": "mrr_at_100", "value": 28.618}, {"type": "mrr_at_1000", "value": 28.685}, {"type": "mrr_at_3", "value": 23.817}, {"type": "mrr_at_5", "value": 25.927}, {"type": "ndcg_at_1", "value": 17.299999999999997}, {"type": "ndcg_at_10", "value": 16.084}, {"type": "ndcg_at_100", "value": 23.729}, {"type": "ndcg_at_1000", "value": 29.476999999999997}, {"type": "ndcg_at_3", "value": 14.327000000000002}, {"type": "ndcg_at_5", "value": 13.017999999999999}, {"type": "precision_at_1", "value": 17.299999999999997}, {"type": "precision_at_10", "value": 8.63}, {"type": "precision_at_100", "value": 1.981}, {"type": "precision_at_1000", "value": 0.336}, {"type": "precision_at_3", "value": 13.4}, {"type": "precision_at_5", "value": 11.700000000000001}, {"type": "recall_at_1", "value": 3.51}, {"type": "recall_at_10", "value": 17.518}, {"type": "recall_at_100", "value": 40.275}, {"type": "recall_at_1000", "value": 68.203}, {"type": "recall_at_3", "value": 8.155}, {"type": "recall_at_5", "value": 11.875}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SICK-E-PL", "type": "PL-MTEB/sicke-pl-pairclassification", "config": "default", "split": 
"test", "revision": "None"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.30248675091724}, {"type": "cos_sim_ap", "value": 83.6756734006714}, {"type": "cos_sim_f1", "value": 74.97367497367497}, {"type": "cos_sim_precision", "value": 73.91003460207612}, {"type": "cos_sim_recall", "value": 76.06837606837607}, {"type": "dot_accuracy", "value": 86.30248675091724}, {"type": "dot_ap", "value": 83.6756734006714}, {"type": "dot_f1", "value": 74.97367497367497}, {"type": "dot_precision", "value": 73.91003460207612}, {"type": "dot_recall", "value": 76.06837606837607}, {"type": "euclidean_accuracy", "value": 86.30248675091724}, {"type": "euclidean_ap", "value": 83.67566984333091}, {"type": "euclidean_f1", "value": 74.97367497367497}, {"type": "euclidean_precision", "value": 73.91003460207612}, {"type": "euclidean_recall", "value": 76.06837606837607}, {"type": "manhattan_accuracy", "value": 86.28210354667753}, {"type": "manhattan_ap", "value": 83.64216119130171}, {"type": "manhattan_f1", "value": 74.92152075340078}, {"type": "manhattan_precision", "value": 73.4107997265892}, {"type": "manhattan_recall", "value": 76.49572649572649}, {"type": "max_accuracy", "value": 86.30248675091724}, {"type": "max_ap", "value": 83.6756734006714}, {"type": "max_f1", "value": 74.97367497367497}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R-PL", "type": "PL-MTEB/sickr-pl-sts", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.23295940859121}, {"type": "cos_sim_spearman", "value": 78.89329160768719}, {"type": "euclidean_pearson", "value": 79.56019107076818}, {"type": "euclidean_spearman", "value": 78.89330209904084}, {"type": "manhattan_pearson", "value": 79.76098513973719}, {"type": "manhattan_spearman", "value": 79.05490162570123}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl)", "type": "mteb/sts22-crosslingual-sts", "config": "pl", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 37.732606308062486}, {"type": "cos_sim_spearman", "value": 41.01645667030284}, {"type": "euclidean_pearson", "value": 26.61722556367085}, {"type": "euclidean_spearman", "value": 41.01645667030284}, {"type": "manhattan_pearson", "value": 26.60917378970807}, {"type": "manhattan_spearman", "value": 41.51335727617614}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact-PL", "type": "clarin-knext/scifact-pl", "config": "default", "split": "test", "revision": "47932a35f045ef8ed01ba82bf9ff67f6e109207e"}, "metrics": [{"type": "map_at_1", "value": 54.31700000000001}, {"type": "map_at_10", "value": 65.564}, {"type": "map_at_100", "value": 66.062}, {"type": "map_at_1000", "value": 66.08699999999999}, {"type": "map_at_3", "value": 62.592999999999996}, {"type": "map_at_5", "value": 63.888}, {"type": "mrr_at_1", "value": 56.99999999999999}, {"type": "mrr_at_10", "value": 66.412}, {"type": "mrr_at_100", "value": 66.85900000000001}, {"type": "mrr_at_1000", "value": 66.88}, {"type": "mrr_at_3", "value": 64.22200000000001}, {"type": "mrr_at_5", "value": 65.206}, {"type": "ndcg_at_1", "value": 56.99999999999999}, {"type": "ndcg_at_10", "value": 70.577}, {"type": "ndcg_at_100", "value": 72.879}, {"type": "ndcg_at_1000", "value": 73.45}, {"type": "ndcg_at_3", "value": 65.5}, {"type": "ndcg_at_5", "value": 67.278}, {"type": "precision_at_1", "value": 56.99999999999999}, {"type": "precision_at_10", "value": 9.667}, {"type": "precision_at_100", "value": 1.083}, {"type": 
"precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 26.0}, {"type": "precision_at_5", "value": 16.933}, {"type": "recall_at_1", "value": 54.31700000000001}, {"type": "recall_at_10", "value": 85.056}, {"type": "recall_at_100", "value": 95.667}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 71.0}, {"type": "recall_at_5", "value": 75.672}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID-PL", "type": "clarin-knext/trec-covid-pl", "config": "default", "split": "test", "revision": "81bcb408f33366c2a20ac54adafad1ae7e877fdd"}, "metrics": [{"type": "map_at_1", "value": 0.245}, {"type": "map_at_10", "value": 2.051}, {"type": "map_at_100", "value": 12.009}, {"type": "map_at_1000", "value": 27.448}, {"type": "map_at_3", "value": 0.721}, {"type": "map_at_5", "value": 1.13}, {"type": "mrr_at_1", "value": 88.0}, {"type": "mrr_at_10", "value": 93.0}, {"type": "mrr_at_100", "value": 93.0}, {"type": "mrr_at_1000", "value": 93.0}, {"type": "mrr_at_3", "value": 93.0}, {"type": "mrr_at_5", "value": 93.0}, {"type": "ndcg_at_1", "value": 85.0}, {"type": "ndcg_at_10", "value": 80.303}, {"type": "ndcg_at_100", "value": 61.23499999999999}, {"type": "ndcg_at_1000", "value": 52.978}, {"type": "ndcg_at_3", "value": 84.419}, {"type": "ndcg_at_5", "value": 82.976}, {"type": "precision_at_1", "value": 88.0}, {"type": "precision_at_10", "value": 83.39999999999999}, {"type": "precision_at_100", "value": 61.96}, {"type": "precision_at_1000", "value": 22.648}, {"type": "precision_at_3", "value": 89.333}, {"type": "precision_at_5", "value": 87.2}, {"type": "recall_at_1", "value": 0.245}, {"type": "recall_at_10", "value": 2.193}, {"type": "recall_at_100", "value": 14.938}, {"type": "recall_at_1000", "value": 48.563}, {"type": "recall_at_3", "value": 0.738}, {"type": "recall_at_5", "value": 1.173}]}]}]}
dataset
null
523
bhavnicksm/brown-beetle-base-v1
bhavnicksm
null
[ "model2vec", "safetensors", "embeddings", "static-embeddings", "sentence-transformers", "mteb", "en", "license:mit", "model-index", "region:us" ]
2025-01-22T20:17:06Z
2025-01-23T15:25:13+00:00
26
3
--- base_model: baai/bge-base-en-v1.5 language: - en library_name: model2vec license: mit tags: - embeddings - static-embeddings - sentence-transformers - mteb model-index: - name: brown-beetle-base-v1 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 70.90704647676162 - type: ap value: 20.809576527783648 - type: ap_weighted value: 20.809576527783648 - type: f1 value: 58.63593463335343 - type: f1_weighted value: 76.3522601923032 - type: main_score value: 70.90704647676162 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 72.44776119402985 - type: ap value: 35.37456318192898 - type: ap_weighted value: 35.37456318192898 - type: f1 value: 66.61214896368735 - type: f1_weighted value: 75.10012201186763 - type: main_score value: 72.44776119402985 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification (default) type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 66.56272500000001 - type: ap value: 61.65156042833797 - type: ap_weighted value: 61.65156042833797 - type: f1 value: 66.05067668571694 - type: f1_weighted value: 66.05067668571694 - type: main_score value: 66.56272500000001 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 32.804 - type: f1 value: 32.191562227975325 - type: f1_weighted value: 32.191562227975325 - type: main_score value: 32.804 - task: type: Retrieval dataset: name: MTEB ArguAna (default) type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: main_score value: 27.472 - type: map_at_1 value: 12.518 - type: map_at_10 value: 22.112000000000002 - type: map_at_100 value: 23.113 - type: map_at_1000 value: 23.194 - type: map_at_20 value: 22.689 - type: map_at_3 value: 19.262999999999998 - type: map_at_5 value: 20.838 - type: mrr_at_1 value: 12.873399715504979 - type: mrr_at_10 value: 22.245760798392343 - type: mrr_at_100 value: 23.2535995386754 - type: mrr_at_1000 value: 23.334415424798767 - type: mrr_at_20 value: 22.832835930440567 - type: mrr_at_3 value: 19.369369369369373 - type: mrr_at_5 value: 21.005215742057814 - type: nauc_map_at_1000_diff1 value: 8.530112404211962 - type: nauc_map_at_1000_max value: -0.610229019588028 - type: nauc_map_at_1000_std value: 13.439326858512171 - type: nauc_map_at_100_diff1 value: 8.5589640159379 - type: nauc_map_at_100_max value: -0.5944016197162708 - type: nauc_map_at_100_std value: 13.489115621758796 - type: nauc_map_at_10_diff1 value: 8.126257265087254 - type: nauc_map_at_10_max value: -0.9104792257460274 - type: nauc_map_at_10_std value: 12.551733293998016 - type: nauc_map_at_1_diff1 value: 10.304469275234254 - type: nauc_map_at_1_max value: -6.5207603714088656 - type: nauc_map_at_1_std value: 11.511920725984798 - type: nauc_map_at_20_diff1 value: 8.392360813027413 - type: nauc_map_at_20_max value: -0.5772326965536835 - type: nauc_map_at_20_std value: 13.095295660026284 - type: nauc_map_at_3_diff1 value: 7.589039911404434 - type: 
nauc_map_at_3_max value: -1.8859866214763326 - type: nauc_map_at_3_std value: 10.864914799543438 - type: nauc_map_at_5_diff1 value: 7.725834671182779 - type: nauc_map_at_5_max value: -2.0421876627364473 - type: nauc_map_at_5_std value: 11.53252283204264 - type: nauc_mrr_at_1000_diff1 value: 7.223924047392896 - type: nauc_mrr_at_1000_max value: -0.8237565406615028 - type: nauc_mrr_at_1000_std value: 12.970720666995705 - type: nauc_mrr_at_100_diff1 value: 7.255600764760114 - type: nauc_mrr_at_100_max value: -0.8075278980068191 - type: nauc_mrr_at_100_std value: 13.021245058986777 - type: nauc_mrr_at_10_diff1 value: 6.824168026586408 - type: nauc_mrr_at_10_max value: -1.1545141075883187 - type: nauc_mrr_at_10_std value: 12.092125297214492 - type: nauc_mrr_at_1_diff1 value: 8.38385763966794 - type: nauc_mrr_at_1_max value: -5.693722977782808 - type: nauc_mrr_at_1_std value: 10.330346403389063 - type: nauc_mrr_at_20_diff1 value: 7.116749792879911 - type: nauc_mrr_at_20_max value: -0.7639025085615958 - type: nauc_mrr_at_20_std value: 12.627125842400034 - type: nauc_mrr_at_3_diff1 value: 5.958470776046153 - type: nauc_mrr_at_3_max value: -2.397322655713469 - type: nauc_mrr_at_3_std value: 10.318678582593435 - type: nauc_mrr_at_5_diff1 value: 6.422851283076855 - type: nauc_mrr_at_5_max value: -2.228505094486492 - type: nauc_mrr_at_5_std value: 11.080240086741586 - type: nauc_ndcg_at_1000_diff1 value: 9.665675037862528 - type: nauc_ndcg_at_1000_max value: 1.8164643463570864 - type: nauc_ndcg_at_1000_std value: 17.273535340960105 - type: nauc_ndcg_at_100_diff1 value: 10.336717458605742 - type: nauc_ndcg_at_100_max value: 2.201049622861128 - type: nauc_ndcg_at_100_std value: 18.594513135944407 - type: nauc_ndcg_at_10_diff1 value: 8.580230032915912 - type: nauc_ndcg_at_10_max value: 1.6184519973149472 - type: nauc_ndcg_at_10_std value: 14.168601330751521 - type: nauc_ndcg_at_1_diff1 value: 10.304469275234254 - type: nauc_ndcg_at_1_max value: -6.5207603714088656 - type: nauc_ndcg_at_1_std value: 11.511920725984798 - type: nauc_ndcg_at_20_diff1 value: 9.452643320635774 - type: nauc_ndcg_at_20_max value: 2.649675021632715 - type: nauc_ndcg_at_20_std value: 15.848558428927983 - type: nauc_ndcg_at_3_diff1 value: 7.423680730820109 - type: nauc_ndcg_at_3_max value: -0.5241914531542782 - type: nauc_ndcg_at_3_std value: 10.79696943710403 - type: nauc_ndcg_at_5_diff1 value: 7.595280413445214 - type: nauc_ndcg_at_5_max value: -0.9084662101000812 - type: nauc_ndcg_at_5_std value: 11.89171024983937 - type: nauc_precision_at_1000_diff1 value: 17.671568881686063 - type: nauc_precision_at_1000_max value: 15.396853331313713 - type: nauc_precision_at_1000_std value: 51.45090306802372 - type: nauc_precision_at_100_diff1 value: 18.340171384916356 - type: nauc_precision_at_100_max value: 10.545554043869352 - type: nauc_precision_at_100_std value: 41.71442317028242 - type: nauc_precision_at_10_diff1 value: 10.046825528020882 - type: nauc_precision_at_10_max value: 7.8956170776495584 - type: nauc_precision_at_10_std value: 18.532526447633877 - type: nauc_precision_at_1_diff1 value: 10.304469275234254 - type: nauc_precision_at_1_max value: -6.5207603714088656 - type: nauc_precision_at_1_std value: 11.511920725984798 - type: nauc_precision_at_20_diff1 value: 12.951545972608155 - type: nauc_precision_at_20_max value: 11.389982355850425 - type: nauc_precision_at_20_std value: 24.00835254089037 - type: nauc_precision_at_3_diff1 value: 7.169726395090002 - type: nauc_precision_at_3_max value: 2.6355879106577915 - type: 
nauc_precision_at_3_std value: 10.664371283765304 - type: nauc_precision_at_5_diff1 value: 7.40977816055324 - type: nauc_precision_at_5_max value: 1.5419005218408786 - type: nauc_precision_at_5_std value: 12.808767406726606 - type: nauc_recall_at_1000_diff1 value: 17.67156888168616 - type: nauc_recall_at_1000_max value: 15.396853331313737 - type: nauc_recall_at_1000_std value: 51.450903068023635 - type: nauc_recall_at_100_diff1 value: 18.340171384916317 - type: nauc_recall_at_100_max value: 10.545554043869341 - type: nauc_recall_at_100_std value: 41.714423170282394 - type: nauc_recall_at_10_diff1 value: 10.046825528020864 - type: nauc_recall_at_10_max value: 7.895617077649546 - type: nauc_recall_at_10_std value: 18.532526447633852 - type: nauc_recall_at_1_diff1 value: 10.304469275234254 - type: nauc_recall_at_1_max value: -6.5207603714088656 - type: nauc_recall_at_1_std value: 11.511920725984798 - type: nauc_recall_at_20_diff1 value: 12.951545972608173 - type: nauc_recall_at_20_max value: 11.389982355850462 - type: nauc_recall_at_20_std value: 24.00835254089037 - type: nauc_recall_at_3_diff1 value: 7.169726395090035 - type: nauc_recall_at_3_max value: 2.63558791065783 - type: nauc_recall_at_3_std value: 10.664371283765313 - type: nauc_recall_at_5_diff1 value: 7.409778160553243 - type: nauc_recall_at_5_max value: 1.5419005218408781 - type: nauc_recall_at_5_std value: 12.808767406726599 - type: ndcg_at_1 value: 12.518 - type: ndcg_at_10 value: 27.472 - type: ndcg_at_100 value: 32.690000000000005 - type: ndcg_at_1000 value: 35.168 - type: ndcg_at_20 value: 29.54 - type: ndcg_at_3 value: 21.560000000000002 - type: ndcg_at_5 value: 24.415 - type: precision_at_1 value: 12.518 - type: precision_at_10 value: 4.459 - type: precision_at_100 value: 0.698 - type: precision_at_1000 value: 0.09 - type: precision_at_20 value: 2.635 - type: precision_at_3 value: 9.411999999999999 - type: precision_at_5 value: 7.041 - type: recall_at_1 value: 12.518 - type: recall_at_10 value: 44.595 - type: recall_at_100 value: 69.844 - type: recall_at_1000 value: 90.04299999999999 - type: recall_at_20 value: 52.703 - type: recall_at_3 value: 28.236 - type: recall_at_5 value: 35.205999999999996 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P (default) type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: main_score value: 30.11004370017134 - type: v_measure value: 30.11004370017134 - type: v_measure_std value: 14.335180861208965 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S (default) type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: main_score value: 19.69451845436496 - type: v_measure value: 19.69451845436496 - type: v_measure_std value: 15.444158883670541 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions (default) type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: main_score value: 51.39726079096234 - type: map value: 51.39726079096234 - type: mrr value: 64.94514795761333 - type: nAUC_map_diff1 value: 13.516398333452804 - type: nAUC_map_max value: 14.194223722139968 - type: nAUC_map_std value: 7.1226539793825925 - type: nAUC_mrr_diff1 value: 15.629882497094707 - type: nAUC_mrr_max value: 19.965579042518318 - type: nAUC_mrr_std value: 13.128556325737211 - task: type: STS dataset: name: MTEB BIOSSES (default) type: mteb/biosses-sts 
config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cosine_pearson value: 72.9724950588563 - type: cosine_spearman value: 73.9095154037482 - type: euclidean_pearson value: 51.29126269915467 - type: euclidean_spearman value: 53.62953523835351 - type: main_score value: 73.9095154037482 - type: manhattan_pearson value: 47.93589517727305 - type: manhattan_spearman value: 50.323435810249705 - task: type: Classification dataset: name: MTEB Banking77Classification (default) type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 71.9448051948052 - type: f1 value: 72.03993637071432 - type: f1_weighted value: 72.03993637071433 - type: main_score value: 71.9448051948052 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P (default) type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: main_score value: 26.10044290663735 - type: v_measure value: 26.10044290663735 - type: v_measure_std value: 0.4850250523953905 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S (default) type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: main_score value: 13.37602244060575 - type: v_measure value: 13.37602244060575 - type: v_measure_std value: 0.6130095640974286 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval (default) type: mteb/cqadupstack-android config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: main_score value: 29.395 - type: map_at_1 value: 18.637 - type: map_at_10 value: 25.094 - type: map_at_100 value: 26.115 - type: map_at_1000 value: 26.259 - type: map_at_20 value: 25.594 - type: map_at_3 value: 23.058 - type: map_at_5 value: 24.035 - type: mrr_at_1 value: 23.74821173104435 - type: mrr_at_10 value: 30.000681245316418 - type: mrr_at_100 value: 30.8270542356755 - type: mrr_at_1000 value: 30.90779908348725 - type: mrr_at_20 value: 30.411377002067137 - type: mrr_at_3 value: 28.30233667143539 - type: mrr_at_5 value: 29.089175011921792 - type: nauc_map_at_1000_diff1 value: 45.44985213475472 - type: nauc_map_at_1000_max value: 31.85554173140791 - type: nauc_map_at_1000_std value: -11.81610624669214 - type: nauc_map_at_100_diff1 value: 45.43648978317603 - type: nauc_map_at_100_max value: 31.848529310376644 - type: nauc_map_at_100_std value: -11.856130231905329 - type: nauc_map_at_10_diff1 value: 45.58637687075205 - type: nauc_map_at_10_max value: 31.503326631977064 - type: nauc_map_at_10_std value: -12.265935940319157 - type: nauc_map_at_1_diff1 value: 52.119568023926696 - type: nauc_map_at_1_max value: 32.08497643741619 - type: nauc_map_at_1_std value: -13.283778697596544 - type: nauc_map_at_20_diff1 value: 45.577938396658155 - type: nauc_map_at_20_max value: 31.7637040046772 - type: nauc_map_at_20_std value: -12.020408001583228 - type: nauc_map_at_3_diff1 value: 46.75147654852644 - type: nauc_map_at_3_max value: 31.500448028398864 - type: nauc_map_at_3_std value: -13.393665476201624 - type: nauc_map_at_5_diff1 value: 46.36664145078783 - type: nauc_map_at_5_max value: 31.343431814823397 - type: nauc_map_at_5_std value: -12.865715523230763 - type: nauc_mrr_at_1000_diff1 value: 45.14925372989887 - type: nauc_mrr_at_1000_max value: 32.76618097106967 - type: nauc_mrr_at_1000_std value: -12.524563118199339 - type: nauc_mrr_at_100_diff1 value: 
45.13373715931408 - type: nauc_mrr_at_100_max value: 32.77708930663885 - type: nauc_mrr_at_100_std value: -12.56931435530875 - type: nauc_mrr_at_10_diff1 value: 45.40838719763305 - type: nauc_mrr_at_10_max value: 32.762868559810784 - type: nauc_mrr_at_10_std value: -12.817907206821655 - type: nauc_mrr_at_1_diff1 value: 50.89134258279399 - type: nauc_mrr_at_1_max value: 34.37095750680418 - type: nauc_mrr_at_1_std value: -14.274479412853886 - type: nauc_mrr_at_20_diff1 value: 45.249541891203215 - type: nauc_mrr_at_20_max value: 32.82316951160751 - type: nauc_mrr_at_20_std value: -12.61002466497651 - type: nauc_mrr_at_3_diff1 value: 46.08602618931601 - type: nauc_mrr_at_3_max value: 32.941253888093804 - type: nauc_mrr_at_3_std value: -13.563733488369248 - type: nauc_mrr_at_5_diff1 value: 45.95778955086399 - type: nauc_mrr_at_5_max value: 32.89757778678747 - type: nauc_mrr_at_5_std value: -13.388699347312574 - type: nauc_ndcg_at_1000_diff1 value: 42.45552310071413 - type: nauc_ndcg_at_1000_max value: 31.851447169128853 - type: nauc_ndcg_at_1000_std value: -9.157899679842178 - type: nauc_ndcg_at_100_diff1 value: 42.13189912941783 - type: nauc_ndcg_at_100_max value: 31.920803739157755 - type: nauc_ndcg_at_100_std value: -10.133311348487833 - type: nauc_ndcg_at_10_diff1 value: 43.14532027005519 - type: nauc_ndcg_at_10_max value: 31.398484315040182 - type: nauc_ndcg_at_10_std value: -11.65740071892807 - type: nauc_ndcg_at_1_diff1 value: 50.89134258279399 - type: nauc_ndcg_at_1_max value: 34.37095750680418 - type: nauc_ndcg_at_1_std value: -14.274479412853886 - type: nauc_ndcg_at_20_diff1 value: 43.05955752222379 - type: nauc_ndcg_at_20_max value: 31.617775415149495 - type: nauc_ndcg_at_20_std value: -10.72382272385622 - type: nauc_ndcg_at_3_diff1 value: 44.345319690154334 - type: nauc_ndcg_at_3_max value: 31.860657201237984 - type: nauc_ndcg_at_3_std value: -13.201036742073732 - type: nauc_ndcg_at_5_diff1 value: 44.2321922039918 - type: nauc_ndcg_at_5_max value: 31.67328744227065 - type: nauc_ndcg_at_5_std value: -12.73240065162892 - type: nauc_precision_at_1000_diff1 value: 7.273423259712273 - type: nauc_precision_at_1000_max value: 6.764651727683099 - type: nauc_precision_at_1000_std value: -1.4966884360432018 - type: nauc_precision_at_100_diff1 value: 14.882288606927712 - type: nauc_precision_at_100_max value: 21.077880381393772 - type: nauc_precision_at_100_std value: -3.2549759401079776 - type: nauc_precision_at_10_diff1 value: 28.60830720280523 - type: nauc_precision_at_10_max value: 28.558880836815003 - type: nauc_precision_at_10_std value: -7.163122385852441 - type: nauc_precision_at_1_diff1 value: 50.89134258279399 - type: nauc_precision_at_1_max value: 34.37095750680418 - type: nauc_precision_at_1_std value: -14.274479412853886 - type: nauc_precision_at_20_diff1 value: 24.08438528220202 - type: nauc_precision_at_20_max value: 28.258801616588247 - type: nauc_precision_at_20_std value: -6.705830110580177 - type: nauc_precision_at_3_diff1 value: 37.93456250117405 - type: nauc_precision_at_3_max value: 31.243409463132032 - type: nauc_precision_at_3_std value: -12.59868434526981 - type: nauc_precision_at_5_diff1 value: 34.729490110300425 - type: nauc_precision_at_5_max value: 30.372494703283788 - type: nauc_precision_at_5_std value: -10.069026416856131 - type: nauc_recall_at_1000_diff1 value: 23.676039981996997 - type: nauc_recall_at_1000_max value: 26.693584681473453 - type: nauc_recall_at_1000_std value: 11.941818004042663 - type: nauc_recall_at_100_diff1 value: 26.974116632964495 - type: 
nauc_recall_at_100_max value: 28.1322789539008 - type: nauc_recall_at_100_std value: -2.793517857097065 - type: nauc_recall_at_10_diff1 value: 34.380731984563155 - type: nauc_recall_at_10_max value: 27.1153265513231 - type: nauc_recall_at_10_std value: -8.019251840545442 - type: nauc_recall_at_1_diff1 value: 52.119568023926696 - type: nauc_recall_at_1_max value: 32.08497643741619 - type: nauc_recall_at_1_std value: -13.283778697596544 - type: nauc_recall_at_20_diff1 value: 33.11437933898011 - type: nauc_recall_at_20_max value: 27.550021643829588 - type: nauc_recall_at_20_std value: -4.660461025976219 - type: nauc_recall_at_3_diff1 value: 39.80493501345255 - type: nauc_recall_at_3_max value: 28.954772937395923 - type: nauc_recall_at_3_std value: -12.62754725500984 - type: nauc_recall_at_5_diff1 value: 38.809559633465454 - type: nauc_recall_at_5_max value: 28.024304327517513 - type: nauc_recall_at_5_std value: -11.285144166535767 - type: ndcg_at_1 value: 23.748 - type: ndcg_at_10 value: 29.395 - type: ndcg_at_100 value: 34.314 - type: ndcg_at_1000 value: 37.422 - type: ndcg_at_20 value: 30.94 - type: ndcg_at_3 value: 26.317 - type: ndcg_at_5 value: 27.331 - type: precision_at_1 value: 23.748 - type: precision_at_10 value: 5.680000000000001 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.156 - type: precision_at_20 value: 3.4189999999999996 - type: precision_at_3 value: 12.637 - type: precision_at_5 value: 9.013 - type: recall_at_1 value: 18.637 - type: recall_at_10 value: 37.092000000000006 - type: recall_at_100 value: 59.556 - type: recall_at_1000 value: 80.739 - type: recall_at_20 value: 42.971 - type: recall_at_3 value: 27.276 - type: recall_at_5 value: 30.469 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval (default) type: mteb/cqadupstack-english config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: main_score value: 23.681 - type: map_at_1 value: 15.042 - type: map_at_10 value: 20.141000000000002 - type: map_at_100 value: 20.904 - type: map_at_1000 value: 21.023 - type: map_at_20 value: 20.523 - type: map_at_3 value: 18.482000000000003 - type: map_at_5 value: 19.345000000000002 - type: mrr_at_1 value: 19.23566878980892 - type: mrr_at_10 value: 24.451824891315347 - type: mrr_at_100 value: 25.134838805782923 - type: mrr_at_1000 value: 25.208900352642388 - type: mrr_at_20 value: 24.808984561765023 - type: mrr_at_3 value: 22.728237791932056 - type: mrr_at_5 value: 23.654989384288733 - type: nauc_map_at_1000_diff1 value: 40.798124812569384 - type: nauc_map_at_1000_max value: 14.81540926855082 - type: nauc_map_at_1000_std value: -5.05385945021678 - type: nauc_map_at_100_diff1 value: 40.829386744060805 - type: nauc_map_at_100_max value: 14.774380892904473 - type: nauc_map_at_100_std value: -5.117516079985261 - type: nauc_map_at_10_diff1 value: 41.15478642914214 - type: nauc_map_at_10_max value: 15.03768443891208 - type: nauc_map_at_10_std value: -5.641265905734448 - type: nauc_map_at_1_diff1 value: 47.54352356674396 - type: nauc_map_at_1_max value: 15.273312981192003 - type: nauc_map_at_1_std value: -6.169908596349911 - type: nauc_map_at_20_diff1 value: 40.97327855637148 - type: nauc_map_at_20_max value: 14.89105826755982 - type: nauc_map_at_20_std value: -5.364240018336858 - type: nauc_map_at_3_diff1 value: 42.231873075275615 - type: nauc_map_at_3_max value: 15.404317252294913 - type: nauc_map_at_3_std value: -5.678580756730022 - type: nauc_map_at_5_diff1 value: 41.66143830829381 - type: 
nauc_map_at_5_max value: 15.447251046070571 - type: nauc_map_at_5_std value: -5.837495650656335 - type: nauc_mrr_at_1000_diff1 value: 39.86420906999884 - type: nauc_mrr_at_1000_max value: 16.135989158186753 - type: nauc_mrr_at_1000_std value: -4.592451604982568 - type: nauc_mrr_at_100_diff1 value: 39.86122888458937 - type: nauc_mrr_at_100_max value: 16.105439789753422 - type: nauc_mrr_at_100_std value: -4.601044925036893 - type: nauc_mrr_at_10_diff1 value: 40.0914830828018 - type: nauc_mrr_at_10_max value: 16.316069710505907 - type: nauc_mrr_at_10_std value: -4.933931119120412 - type: nauc_mrr_at_1_diff1 value: 45.872319574398595 - type: nauc_mrr_at_1_max value: 17.714407532873587 - type: nauc_mrr_at_1_std value: -5.892428732338192 - type: nauc_mrr_at_20_diff1 value: 39.968104403603064 - type: nauc_mrr_at_20_max value: 16.250579894010908 - type: nauc_mrr_at_20_std value: -4.6913201222123115 - type: nauc_mrr_at_3_diff1 value: 40.98138119843196 - type: nauc_mrr_at_3_max value: 16.753412976976964 - type: nauc_mrr_at_3_std value: -4.862607910994618 - type: nauc_mrr_at_5_diff1 value: 40.51817434109358 - type: nauc_mrr_at_5_max value: 16.669114474829712 - type: nauc_mrr_at_5_std value: -5.0187913261619945 - type: nauc_ndcg_at_1000_diff1 value: 37.322805503060394 - type: nauc_ndcg_at_1000_max value: 14.074508601767524 - type: nauc_ndcg_at_1000_std value: -2.5684467253264294 - type: nauc_ndcg_at_100_diff1 value: 37.680833522451216 - type: nauc_ndcg_at_100_max value: 13.218661114047158 - type: nauc_ndcg_at_100_std value: -3.2872551022227774 - type: nauc_ndcg_at_10_diff1 value: 38.946324884525104 - type: nauc_ndcg_at_10_max value: 14.56340596052078 - type: nauc_ndcg_at_10_std value: -4.900816452861336 - type: nauc_ndcg_at_1_diff1 value: 45.872319574398595 - type: nauc_ndcg_at_1_max value: 17.714407532873587 - type: nauc_ndcg_at_1_std value: -5.892428732338192 - type: nauc_ndcg_at_20_diff1 value: 38.43824761822619 - type: nauc_ndcg_at_20_max value: 14.1179521561548 - type: nauc_ndcg_at_20_std value: -4.24942445066419 - type: nauc_ndcg_at_3_diff1 value: 40.14877067296726 - type: nauc_ndcg_at_3_max value: 15.867529420424223 - type: nauc_ndcg_at_3_std value: -4.932613633444065 - type: nauc_ndcg_at_5_diff1 value: 39.6102519927959 - type: nauc_ndcg_at_5_max value: 15.609756851439455 - type: nauc_ndcg_at_5_std value: -5.26940412982977 - type: nauc_precision_at_1000_diff1 value: -0.9954109220208948 - type: nauc_precision_at_1000_max value: 11.967578992629335 - type: nauc_precision_at_1000_std value: 9.014871288529228 - type: nauc_precision_at_100_diff1 value: 9.89964263137245 - type: nauc_precision_at_100_max value: 9.908412889796272 - type: nauc_precision_at_100_std value: 6.592828334609421 - type: nauc_precision_at_10_diff1 value: 24.28741469421518 - type: nauc_precision_at_10_max value: 13.455460040232389 - type: nauc_precision_at_10_std value: -2.3085437386023773 - type: nauc_precision_at_1_diff1 value: 45.872319574398595 - type: nauc_precision_at_1_max value: 17.714407532873587 - type: nauc_precision_at_1_std value: -5.892428732338192 - type: nauc_precision_at_20_diff1 value: 19.81016945673257 - type: nauc_precision_at_20_max value: 13.617095525972758 - type: nauc_precision_at_20_std value: 0.8956778782497932 - type: nauc_precision_at_3_diff1 value: 31.47336887855281 - type: nauc_precision_at_3_max value: 17.33370290746675 - type: nauc_precision_at_3_std value: -3.1323661307841824 - type: nauc_precision_at_5_diff1 value: 28.487140523674654 - type: nauc_precision_at_5_max value: 16.480475549147176 - 
type: nauc_precision_at_5_std value: -3.725675220452465 - type: nauc_recall_at_1000_diff1 value: 24.064977596273092 - type: nauc_recall_at_1000_max value: 9.481115572308768 - type: nauc_recall_at_1000_std value: 7.1196676914786305 - type: nauc_recall_at_100_diff1 value: 27.68996032837215 - type: nauc_recall_at_100_max value: 5.569519308774954 - type: nauc_recall_at_100_std value: 2.174562988626623 - type: nauc_recall_at_10_diff1 value: 33.4166528457575 - type: nauc_recall_at_10_max value: 11.526480134166073 - type: nauc_recall_at_10_std value: -3.9714508194727993 - type: nauc_recall_at_1_diff1 value: 47.54352356674396 - type: nauc_recall_at_1_max value: 15.273312981192003 - type: nauc_recall_at_1_std value: -6.169908596349911 - type: nauc_recall_at_20_diff1 value: 31.174108795272655 - type: nauc_recall_at_20_max value: 9.49403140642074 - type: nauc_recall_at_20_std value: -1.7053654233265276 - type: nauc_recall_at_3_diff1 value: 36.975946663308655 - type: nauc_recall_at_3_max value: 13.846841332248397 - type: nauc_recall_at_3_std value: -4.620179845226721 - type: nauc_recall_at_5_diff1 value: 35.32921422655988 - type: nauc_recall_at_5_max value: 13.64989734279998 - type: nauc_recall_at_5_std value: -5.11567851944459 - type: ndcg_at_1 value: 19.236 - type: ndcg_at_10 value: 23.681 - type: ndcg_at_100 value: 27.378000000000004 - type: ndcg_at_1000 value: 30.263 - type: ndcg_at_20 value: 24.869 - type: ndcg_at_3 value: 20.990000000000002 - type: ndcg_at_5 value: 22.112000000000002 - type: precision_at_1 value: 19.236 - type: precision_at_10 value: 4.561 - type: precision_at_100 value: 0.8130000000000001 - type: precision_at_1000 value: 0.131 - type: precision_at_20 value: 2.7390000000000003 - type: precision_at_3 value: 10.318 - type: precision_at_5 value: 7.35 - type: recall_at_1 value: 15.042 - type: recall_at_10 value: 29.768 - type: recall_at_100 value: 46.403 - type: recall_at_1000 value: 66.237 - type: recall_at_20 value: 34.172999999999995 - type: recall_at_3 value: 21.736 - type: recall_at_5 value: 24.909 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval (default) type: mteb/cqadupstack-gaming config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: main_score value: 33.292 - type: map_at_1 value: 21.258 - type: map_at_10 value: 28.968 - type: map_at_100 value: 29.87 - type: map_at_1000 value: 29.967 - type: map_at_20 value: 29.432000000000002 - type: map_at_3 value: 26.772000000000002 - type: map_at_5 value: 28.101 - type: mrr_at_1 value: 24.890282131661444 - type: mrr_at_10 value: 31.909364581778334 - type: mrr_at_100 value: 32.725252862811274 - type: mrr_at_1000 value: 32.79159341343287 - type: mrr_at_20 value: 32.341802530734604 - type: mrr_at_3 value: 29.916405433646787 - type: mrr_at_5 value: 31.148380355276856 - type: nauc_map_at_1000_diff1 value: 42.89898485719857 - type: nauc_map_at_1000_max value: 25.7601834300267 - type: nauc_map_at_1000_std value: -6.8075678780363065 - type: nauc_map_at_100_diff1 value: 42.88303590625868 - type: nauc_map_at_100_max value: 25.748000671003506 - type: nauc_map_at_100_std value: -6.880066721152117 - type: nauc_map_at_10_diff1 value: 42.97229732654456 - type: nauc_map_at_10_max value: 25.73618000005681 - type: nauc_map_at_10_std value: -7.416149046327371 - type: nauc_map_at_1_diff1 value: 47.83647924362518 - type: nauc_map_at_1_max value: 26.419122024307985 - type: nauc_map_at_1_std value: -11.21630730855683 - type: nauc_map_at_20_diff1 value: 42.92765195082491 - type: 
nauc_map_at_20_max value: 25.827247499669042 - type: nauc_map_at_20_std value: -7.25577277560294 - type: nauc_map_at_3_diff1 value: 43.25354682000908 - type: nauc_map_at_3_max value: 25.475868171430445 - type: nauc_map_at_3_std value: -8.736090401561846 - type: nauc_map_at_5_diff1 value: 43.19691208993581 - type: nauc_map_at_5_max value: 25.78026385201875 - type: nauc_map_at_5_std value: -7.410757681862602 - type: nauc_mrr_at_1000_diff1 value: 43.75833402735505 - type: nauc_mrr_at_1000_max value: 27.696252388661918 - type: nauc_mrr_at_1000_std value: -5.26810595515753 - type: nauc_mrr_at_100_diff1 value: 43.74265948204468 - type: nauc_mrr_at_100_max value: 27.68162942740836 - type: nauc_mrr_at_100_std value: -5.273744266587032 - type: nauc_mrr_at_10_diff1 value: 43.856655689435364 - type: nauc_mrr_at_10_max value: 27.850212260475832 - type: nauc_mrr_at_10_std value: -5.651027150885109 - type: nauc_mrr_at_1_diff1 value: 49.482145902956546 - type: nauc_mrr_at_1_max value: 29.40696837180673 - type: nauc_mrr_at_1_std value: -9.246840389820699 - type: nauc_mrr_at_20_diff1 value: 43.773790918590606 - type: nauc_mrr_at_20_max value: 27.813596603253572 - type: nauc_mrr_at_20_std value: -5.563343410112547 - type: nauc_mrr_at_3_diff1 value: 44.1863992693496 - type: nauc_mrr_at_3_max value: 27.975687791183194 - type: nauc_mrr_at_3_std value: -6.566771188686054 - type: nauc_mrr_at_5_diff1 value: 44.28502525647762 - type: nauc_mrr_at_5_max value: 28.22823260746294 - type: nauc_mrr_at_5_std value: -5.664969849271516 - type: nauc_ndcg_at_1000_diff1 value: 41.10279297865673 - type: nauc_ndcg_at_1000_max value: 25.15483651126361 - type: nauc_ndcg_at_1000_std value: -2.326246701669577 - type: nauc_ndcg_at_100_diff1 value: 40.68733230153336 - type: nauc_ndcg_at_100_max value: 24.920497030562395 - type: nauc_ndcg_at_100_std value: -3.2491794009868062 - type: nauc_ndcg_at_10_diff1 value: 41.22720271830224 - type: nauc_ndcg_at_10_max value: 25.591609324815213 - type: nauc_ndcg_at_10_std value: -5.930931282520972 - type: nauc_ndcg_at_1_diff1 value: 49.482145902956546 - type: nauc_ndcg_at_1_max value: 29.40696837180673 - type: nauc_ndcg_at_1_std value: -9.246840389820699 - type: nauc_ndcg_at_20_diff1 value: 41.02978538915726 - type: nauc_ndcg_at_20_max value: 25.692164466960982 - type: nauc_ndcg_at_20_std value: -5.481258872610866 - type: nauc_ndcg_at_3_diff1 value: 41.946789824936715 - type: nauc_ndcg_at_3_max value: 25.819165242311083 - type: nauc_ndcg_at_3_std value: -7.651398955832938 - type: nauc_ndcg_at_5_diff1 value: 41.89260094688737 - type: nauc_ndcg_at_5_max value: 26.092454786522957 - type: nauc_ndcg_at_5_std value: -5.831834443535013 - type: nauc_precision_at_1000_diff1 value: 7.432542782735883 - type: nauc_precision_at_1000_max value: 9.331553140370588 - type: nauc_precision_at_1000_std value: 28.23885596670155 - type: nauc_precision_at_100_diff1 value: 19.6661654667304 - type: nauc_precision_at_100_max value: 16.6847940189733 - type: nauc_precision_at_100_std value: 18.310214580560057 - type: nauc_precision_at_10_diff1 value: 32.10200793695008 - type: nauc_precision_at_10_max value: 23.864811590537947 - type: nauc_precision_at_10_std value: 1.4030918024799062 - type: nauc_precision_at_1_diff1 value: 49.482145902956546 - type: nauc_precision_at_1_max value: 29.40696837180673 - type: nauc_precision_at_1_std value: -9.246840389820699 - type: nauc_precision_at_20_diff1 value: 29.476241810558673 - type: nauc_precision_at_20_max value: 23.96668161723849 - type: nauc_precision_at_20_std value: 
4.306914916353381 - type: nauc_precision_at_3_diff1 value: 36.06776696045971 - type: nauc_precision_at_3_max value: 25.929370510324745 - type: nauc_precision_at_3_std value: -3.7615220021347517 - type: nauc_precision_at_5_diff1 value: 35.32396504605641 - type: nauc_precision_at_5_max value: 25.95265820819126 - type: nauc_precision_at_5_std value: 1.1670946217187153 - type: nauc_recall_at_1000_diff1 value: 29.164546397383145 - type: nauc_recall_at_1000_max value: 15.621267941592098 - type: nauc_recall_at_1000_std value: 26.27547002407044 - type: nauc_recall_at_100_diff1 value: 29.153994431881447 - type: nauc_recall_at_100_max value: 16.748491583987608 - type: nauc_recall_at_100_std value: 9.462267347861445 - type: nauc_recall_at_10_diff1 value: 34.05468080049927 - type: nauc_recall_at_10_max value: 22.204610247322602 - type: nauc_recall_at_10_std value: -3.5086309143508814 - type: nauc_recall_at_1_diff1 value: 47.83647924362518 - type: nauc_recall_at_1_max value: 26.419122024307985 - type: nauc_recall_at_1_std value: -11.21630730855683 - type: nauc_recall_at_20_diff1 value: 33.12835279154617 - type: nauc_recall_at_20_max value: 22.306853620231067 - type: nauc_recall_at_20_std value: -2.033052592471381 - type: nauc_recall_at_3_diff1 value: 36.95894401551376 - type: nauc_recall_at_3_max value: 22.786504846733948 - type: nauc_recall_at_3_std value: -6.979614609488201 - type: nauc_recall_at_5_diff1 value: 36.46425114286232 - type: nauc_recall_at_5_max value: 23.920023442782707 - type: nauc_recall_at_5_std value: -3.0154588250727543 - type: ndcg_at_1 value: 24.89 - type: ndcg_at_10 value: 33.292 - type: ndcg_at_100 value: 37.901 - type: ndcg_at_1000 value: 40.285 - type: ndcg_at_20 value: 34.884 - type: ndcg_at_3 value: 29.238999999999997 - type: ndcg_at_5 value: 31.367 - type: precision_at_1 value: 24.89 - type: precision_at_10 value: 5.442 - type: precision_at_100 value: 0.849 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_20 value: 3.132 - type: precision_at_3 value: 13.312 - type: precision_at_5 value: 9.342 - type: recall_at_1 value: 21.258 - type: recall_at_10 value: 43.651 - type: recall_at_100 value: 64.885 - type: recall_at_1000 value: 82.248 - type: recall_at_20 value: 49.580999999999996 - type: recall_at_3 value: 32.625 - type: recall_at_5 value: 37.957 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval (default) type: mteb/cqadupstack-gis config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: main_score value: 16.07 - type: map_at_1 value: 9.415999999999999 - type: map_at_10 value: 13.513 - type: map_at_100 value: 14.224999999999998 - type: map_at_1000 value: 14.319 - type: map_at_20 value: 13.866 - type: map_at_3 value: 12.112 - type: map_at_5 value: 12.926000000000002 - type: mrr_at_1 value: 10.056497175141244 - type: mrr_at_10 value: 14.38808178638687 - type: mrr_at_100 value: 15.119805851668064 - type: mrr_at_1000 value: 15.205731909234233 - type: mrr_at_20 value: 14.75397466980612 - type: mrr_at_3 value: 12.862523540489635 - type: mrr_at_5 value: 13.77212806026365 - type: nauc_map_at_1000_diff1 value: 30.749479495702342 - type: nauc_map_at_1000_max value: 18.061350456252757 - type: nauc_map_at_1000_std value: -12.776716311222378 - type: nauc_map_at_100_diff1 value: 30.745239452411266 - type: nauc_map_at_100_max value: 18.00287521387654 - type: nauc_map_at_100_std value: -12.762249840519019 - type: nauc_map_at_10_diff1 value: 31.585120048362626 - type: nauc_map_at_10_max value: 18.39732510168566 
- type: nauc_map_at_10_std value: -13.04298966234414 - type: nauc_map_at_1_diff1 value: 39.53220236575913 - type: nauc_map_at_1_max value: 19.291078384369037 - type: nauc_map_at_1_std value: -15.847700407312121 - type: nauc_map_at_20_diff1 value: 31.15292842056701 - type: nauc_map_at_20_max value: 18.13662897968119 - type: nauc_map_at_20_std value: -12.848240363122192 - type: nauc_map_at_3_diff1 value: 33.313193762120996 - type: nauc_map_at_3_max value: 18.718856751497455 - type: nauc_map_at_3_std value: -14.859757228365305 - type: nauc_map_at_5_diff1 value: 32.23145271963652 - type: nauc_map_at_5_max value: 18.686858524820614 - type: nauc_map_at_5_std value: -13.710819578206074 - type: nauc_mrr_at_1000_diff1 value: 29.8165900318886 - type: nauc_mrr_at_1000_max value: 20.23811240329599 - type: nauc_mrr_at_1000_std value: -11.894134707547828 - type: nauc_mrr_at_100_diff1 value: 29.79693245528083 - type: nauc_mrr_at_100_max value: 20.20487363279151 - type: nauc_mrr_at_100_std value: -11.8801471861184 - type: nauc_mrr_at_10_diff1 value: 30.555491785566787 - type: nauc_mrr_at_10_max value: 20.727418041975238 - type: nauc_mrr_at_10_std value: -12.17749828295938 - type: nauc_mrr_at_1_diff1 value: 38.471750784591066 - type: nauc_mrr_at_1_max value: 21.693359914033035 - type: nauc_mrr_at_1_std value: -15.027184530198495 - type: nauc_mrr_at_20_diff1 value: 30.124573695443253 - type: nauc_mrr_at_20_max value: 20.387777693647998 - type: nauc_mrr_at_20_std value: -11.986519353678883 - type: nauc_mrr_at_3_diff1 value: 31.924325436195495 - type: nauc_mrr_at_3_max value: 20.617013722734008 - type: nauc_mrr_at_3_std value: -14.110436011957422 - type: nauc_mrr_at_5_diff1 value: 30.993974966945082 - type: nauc_mrr_at_5_max value: 20.986844373402263 - type: nauc_mrr_at_5_std value: -12.69277901580161 - type: nauc_ndcg_at_1000_diff1 value: 25.982176878556317 - type: nauc_ndcg_at_1000_max value: 17.957848463581367 - type: nauc_ndcg_at_1000_std value: -10.478813728245443 - type: nauc_ndcg_at_100_diff1 value: 25.170121843912362 - type: nauc_ndcg_at_100_max value: 16.255524144508325 - type: nauc_ndcg_at_100_std value: -9.984533384788604 - type: nauc_ndcg_at_10_diff1 value: 28.577877265628548 - type: nauc_ndcg_at_10_max value: 18.13117862235857 - type: nauc_ndcg_at_10_std value: -10.906065025018682 - type: nauc_ndcg_at_1_diff1 value: 38.471750784591066 - type: nauc_ndcg_at_1_max value: 21.693359914033035 - type: nauc_ndcg_at_1_std value: -15.027184530198495 - type: nauc_ndcg_at_20_diff1 value: 27.10928770072782 - type: nauc_ndcg_at_20_max value: 17.30763169934487 - type: nauc_ndcg_at_20_std value: -10.399408092338273 - type: nauc_ndcg_at_3_diff1 value: 31.042482747608286 - type: nauc_ndcg_at_3_max value: 18.738158681504135 - type: nauc_ndcg_at_3_std value: -14.477327055073575 - type: nauc_ndcg_at_5_diff1 value: 29.660154138043147 - type: nauc_ndcg_at_5_max value: 18.84211095319927 - type: nauc_ndcg_at_5_std value: -12.327752711951133 - type: nauc_precision_at_1000_diff1 value: 7.633443912786396 - type: nauc_precision_at_1000_max value: 21.566986560692477 - type: nauc_precision_at_1000_std value: -2.445375271482855 - type: nauc_precision_at_100_diff1 value: 9.318121904657204 - type: nauc_precision_at_100_max value: 13.380483592987227 - type: nauc_precision_at_100_std value: -3.8250949950041795 - type: nauc_precision_at_10_diff1 value: 20.19648074695999 - type: nauc_precision_at_10_max value: 18.698777956049263 - type: nauc_precision_at_10_std value: -6.150147847173545 - type: nauc_precision_at_1_diff1 value: 
38.471750784591066 - type: nauc_precision_at_1_max value: 21.693359914033035 - type: nauc_precision_at_1_std value: -15.027184530198495 - type: nauc_precision_at_20_diff1 value: 16.533311292140727 - type: nauc_precision_at_20_max value: 17.695708265296 - type: nauc_precision_at_20_std value: -4.705535015858615 - type: nauc_precision_at_3_diff1 value: 25.326780965328666 - type: nauc_precision_at_3_max value: 19.89839754219193 - type: nauc_precision_at_3_std value: -13.489806593621662 - type: nauc_precision_at_5_diff1 value: 22.651705939365204 - type: nauc_precision_at_5_max value: 20.655007483997082 - type: nauc_precision_at_5_std value: -9.011224009514967 - type: nauc_recall_at_1000_diff1 value: 15.004367482615095 - type: nauc_recall_at_1000_max value: 17.748576991915314 - type: nauc_recall_at_1000_std value: -6.336771887149544 - type: nauc_recall_at_100_diff1 value: 12.26225847741849 - type: nauc_recall_at_100_max value: 8.881243122304054 - type: nauc_recall_at_100_std value: -4.136516621641661 - type: nauc_recall_at_10_diff1 value: 22.427386838623846 - type: nauc_recall_at_10_max value: 15.21049389571777 - type: nauc_recall_at_10_std value: -6.30926628254321 - type: nauc_recall_at_1_diff1 value: 39.53220236575913 - type: nauc_recall_at_1_max value: 19.291078384369037 - type: nauc_recall_at_1_std value: -15.847700407312121 - type: nauc_recall_at_20_diff1 value: 18.637972178861908 - type: nauc_recall_at_20_max value: 12.960062439294784 - type: nauc_recall_at_20_std value: -5.432871665457346 - type: nauc_recall_at_3_diff1 value: 26.860762414942542 - type: nauc_recall_at_3_max value: 17.111730042893747 - type: nauc_recall_at_3_std value: -13.66463201462077 - type: nauc_recall_at_5_diff1 value: 24.99125073047622 - type: nauc_recall_at_5_max value: 17.157076930941727 - type: nauc_recall_at_5_std value: -9.709045620839477 - type: ndcg_at_1 value: 10.056 - type: ndcg_at_10 value: 16.07 - type: ndcg_at_100 value: 20.119999999999997 - type: ndcg_at_1000 value: 23.135 - type: ndcg_at_20 value: 17.379 - type: ndcg_at_3 value: 13.196 - type: ndcg_at_5 value: 14.667 - type: precision_at_1 value: 10.056 - type: precision_at_10 value: 2.621 - type: precision_at_100 value: 0.49500000000000005 - type: precision_at_1000 value: 0.08 - type: precision_at_20 value: 1.6049999999999998 - type: precision_at_3 value: 5.65 - type: precision_at_5 value: 4.226 - type: recall_at_1 value: 9.415999999999999 - type: recall_at_10 value: 23.146 - type: recall_at_100 value: 42.798 - type: recall_at_1000 value: 66.647 - type: recall_at_20 value: 28.222 - type: recall_at_3 value: 15.537 - type: recall_at_5 value: 18.971 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval (default) type: mteb/cqadupstack-mathematica config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: main_score value: 10.333 - type: map_at_1 value: 5.122 - type: map_at_10 value: 8.056000000000001 - type: map_at_100 value: 8.802 - type: map_at_1000 value: 8.912 - type: map_at_20 value: 8.415000000000001 - type: map_at_3 value: 7.045999999999999 - type: map_at_5 value: 7.504 - type: mrr_at_1 value: 6.467661691542288 - type: mrr_at_10 value: 9.997384111190083 - type: mrr_at_100 value: 10.780503968726906 - type: mrr_at_1000 value: 10.877815365669848 - type: mrr_at_20 value: 10.387299394522376 - type: mrr_at_3 value: 8.747927031509123 - type: mrr_at_5 value: 9.37603648424544 - type: nauc_map_at_1000_diff1 value: 16.627745082398647 - type: nauc_map_at_1000_max value: 13.859196754512038 - type: 
nauc_map_at_1000_std value: -2.0046507955951545 - type: nauc_map_at_100_diff1 value: 16.620272060480293 - type: nauc_map_at_100_max value: 13.888542915508207 - type: nauc_map_at_100_std value: -2.1508604539816405 - type: nauc_map_at_10_diff1 value: 16.54291042030997 - type: nauc_map_at_10_max value: 14.679948762856155 - type: nauc_map_at_10_std value: -2.0508176657469925 - type: nauc_map_at_1_diff1 value: 26.648517428464473 - type: nauc_map_at_1_max value: 14.172118938664543 - type: nauc_map_at_1_std value: -4.531793333515623 - type: nauc_map_at_20_diff1 value: 16.586117993573247 - type: nauc_map_at_20_max value: 13.902879810509836 - type: nauc_map_at_20_std value: -2.1637773579833284 - type: nauc_map_at_3_diff1 value: 17.292405890978245 - type: nauc_map_at_3_max value: 14.88845860580791 - type: nauc_map_at_3_std value: -3.8731525741198434 - type: nauc_map_at_5_diff1 value: 17.063873849249006 - type: nauc_map_at_5_max value: 14.472842242085832 - type: nauc_map_at_5_std value: -3.2215593846047637 - type: nauc_mrr_at_1000_diff1 value: 17.195672190983608 - type: nauc_mrr_at_1000_max value: 16.342766844618215 - type: nauc_mrr_at_1000_std value: -1.1235080643915678 - type: nauc_mrr_at_100_diff1 value: 17.139546677591238 - type: nauc_mrr_at_100_max value: 16.346425503757565 - type: nauc_mrr_at_100_std value: -1.2336496415510974 - type: nauc_mrr_at_10_diff1 value: 17.421668919941986 - type: nauc_mrr_at_10_max value: 17.033376602230828 - type: nauc_mrr_at_10_std value: -1.2493483044737175 - type: nauc_mrr_at_1_diff1 value: 26.65544099259078 - type: nauc_mrr_at_1_max value: 17.10769821821117 - type: nauc_mrr_at_1_std value: -2.72507465768404 - type: nauc_mrr_at_20_diff1 value: 17.123070882175753 - type: nauc_mrr_at_20_max value: 16.290797946719834 - type: nauc_mrr_at_20_std value: -1.0559190532852607 - type: nauc_mrr_at_3_diff1 value: 18.503311769244924 - type: nauc_mrr_at_3_max value: 17.660736027174302 - type: nauc_mrr_at_3_std value: -2.1922179141352234 - type: nauc_mrr_at_5_diff1 value: 17.87253349268872 - type: nauc_mrr_at_5_max value: 17.29405417834218 - type: nauc_mrr_at_5_std value: -2.276297588731558 - type: nauc_ndcg_at_1000_diff1 value: 14.450990987909975 - type: nauc_ndcg_at_1000_max value: 12.61179895702807 - type: nauc_ndcg_at_1000_std value: 2.1787457701847006 - type: nauc_ndcg_at_100_diff1 value: 13.868792706107108 - type: nauc_ndcg_at_100_max value: 12.876251575225254 - type: nauc_ndcg_at_100_std value: -0.9023302572828659 - type: nauc_ndcg_at_10_diff1 value: 14.18618751878955 - type: nauc_ndcg_at_10_max value: 15.44002664591339 - type: nauc_ndcg_at_10_std value: -0.2908150507923372 - type: nauc_ndcg_at_1_diff1 value: 26.65544099259078 - type: nauc_ndcg_at_1_max value: 17.10769821821117 - type: nauc_ndcg_at_1_std value: -2.72507465768404 - type: nauc_ndcg_at_20_diff1 value: 14.021582557942699 - type: nauc_ndcg_at_20_max value: 12.843878363016215 - type: nauc_ndcg_at_20_std value: -0.5317355206153845 - type: nauc_ndcg_at_3_diff1 value: 15.27030031763437 - type: nauc_ndcg_at_3_max value: 16.442777903842174 - type: nauc_ndcg_at_3_std value: -3.4853935802800864 - type: nauc_ndcg_at_5_diff1 value: 15.053308688870072 - type: nauc_ndcg_at_5_max value: 15.493086436510678 - type: nauc_ndcg_at_5_std value: -2.5841189511983695 - type: nauc_precision_at_1000_diff1 value: 5.162665834337446 - type: nauc_precision_at_1000_max value: 5.426553384527509 - type: nauc_precision_at_1000_std value: 6.1242440048302695 - type: nauc_precision_at_100_diff1 value: 5.240996534418689 - type: 
nauc_precision_at_100_max value: 9.06975798955498 - type: nauc_precision_at_100_std value: -2.961393279607517 - type: nauc_precision_at_10_diff1 value: 8.19432780347633 - type: nauc_precision_at_10_max value: 16.033136985617734 - type: nauc_precision_at_10_std value: 0.92060297716355 - type: nauc_precision_at_1_diff1 value: 26.65544099259078 - type: nauc_precision_at_1_max value: 17.10769821821117 - type: nauc_precision_at_1_std value: -2.72507465768404 - type: nauc_precision_at_20_diff1 value: 8.218392783839754 - type: nauc_precision_at_20_max value: 9.279320896895346 - type: nauc_precision_at_20_std value: 0.5719429607659788 - type: nauc_precision_at_3_diff1 value: 10.598049592179171 - type: nauc_precision_at_3_max value: 18.292981072202778 - type: nauc_precision_at_3_std value: -1.9747521095182612 - type: nauc_precision_at_5_diff1 value: 9.4592422188968 - type: nauc_precision_at_5_max value: 16.820892184546253 - type: nauc_precision_at_5_std value: -1.4503082963318303 - type: nauc_recall_at_1000_diff1 value: 11.42106802052846 - type: nauc_recall_at_1000_max value: 7.7142629478343965 - type: nauc_recall_at_1000_std value: 14.064107059885153 - type: nauc_recall_at_100_diff1 value: 9.533537910457907 - type: nauc_recall_at_100_max value: 8.918433756778455 - type: nauc_recall_at_100_std value: 0.6068026275245649 - type: nauc_recall_at_10_diff1 value: 9.410565718560424 - type: nauc_recall_at_10_max value: 15.389790528147987 - type: nauc_recall_at_10_std value: 2.911492221412525 - type: nauc_recall_at_1_diff1 value: 26.648517428464473 - type: nauc_recall_at_1_max value: 14.172118938664543 - type: nauc_recall_at_1_std value: -4.531793333515623 - type: nauc_recall_at_20_diff1 value: 9.507727153647583 - type: nauc_recall_at_20_max value: 8.659458970332985 - type: nauc_recall_at_20_std value: 1.564558976763232 - type: nauc_recall_at_3_diff1 value: 9.976406177297271 - type: nauc_recall_at_3_max value: 16.56979232924191 - type: nauc_recall_at_3_std value: -3.204552187951311 - type: nauc_recall_at_5_diff1 value: 10.283335368188732 - type: nauc_recall_at_5_max value: 14.869143869085146 - type: nauc_recall_at_5_std value: -1.3854541602405859 - type: ndcg_at_1 value: 6.468 - type: ndcg_at_10 value: 10.333 - type: ndcg_at_100 value: 14.437 - type: ndcg_at_1000 value: 17.7 - type: ndcg_at_20 value: 11.641 - type: ndcg_at_3 value: 8.222999999999999 - type: ndcg_at_5 value: 9.030000000000001 - type: precision_at_1 value: 6.468 - type: precision_at_10 value: 2.0650000000000004 - type: precision_at_100 value: 0.485 - type: precision_at_1000 value: 0.08800000000000001 - type: precision_at_20 value: 1.374 - type: precision_at_3 value: 4.063 - type: precision_at_5 value: 3.0349999999999997 - type: recall_at_1 value: 5.122 - type: recall_at_10 value: 15.494 - type: recall_at_100 value: 34.224 - type: recall_at_1000 value: 58.475 - type: recall_at_20 value: 20.281 - type: recall_at_3 value: 9.751999999999999 - type: recall_at_5 value: 11.654 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval (default) type: mteb/cqadupstack-physics config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: main_score value: 22.541 - type: map_at_1 value: 13.925 - type: map_at_10 value: 18.919 - type: map_at_100 value: 19.986 - type: map_at_1000 value: 20.122999999999998 - type: map_at_20 value: 19.454 - type: map_at_3 value: 17.128 - type: map_at_5 value: 18.203 - type: mrr_at_1 value: 16.93936477382098 - type: mrr_at_10 value: 22.677177383625892 - type: mrr_at_100 value: 
23.604708246998403 - type: mrr_at_1000 value: 23.68613779725607 - type: mrr_at_20 value: 23.153477073193283 - type: mrr_at_3 value: 20.7571382739814 - type: mrr_at_5 value: 21.916907282643578 - type: nauc_map_at_1000_diff1 value: 39.76772856309066 - type: nauc_map_at_1000_max value: 22.353115497562158 - type: nauc_map_at_1000_std value: 0.3117135171829511 - type: nauc_map_at_100_diff1 value: 39.78846396189273 - type: nauc_map_at_100_max value: 22.363077131125365 - type: nauc_map_at_100_std value: 0.2284514348299411 - type: nauc_map_at_10_diff1 value: 39.81391249750955 - type: nauc_map_at_10_max value: 22.175966030251622 - type: nauc_map_at_10_std value: -0.44362610335129193 - type: nauc_map_at_1_diff1 value: 49.32991220296194 - type: nauc_map_at_1_max value: 24.83395680944923 - type: nauc_map_at_1_std value: -0.7479527140782966 - type: nauc_map_at_20_diff1 value: 39.88873226053775 - type: nauc_map_at_20_max value: 22.284944795016763 - type: nauc_map_at_20_std value: -0.1297029523950583 - type: nauc_map_at_3_diff1 value: 40.94117243505588 - type: nauc_map_at_3_max value: 23.178606683652237 - type: nauc_map_at_3_std value: -0.9328230609603833 - type: nauc_map_at_5_diff1 value: 39.70960944345954 - type: nauc_map_at_5_max value: 22.400765269020813 - type: nauc_map_at_5_std value: -0.4493564812963111 - type: nauc_mrr_at_1000_diff1 value: 38.09736089241541 - type: nauc_mrr_at_1000_max value: 24.95778301028415 - type: nauc_mrr_at_1000_std value: 2.1983425445724563 - type: nauc_mrr_at_100_diff1 value: 38.07672381248107 - type: nauc_mrr_at_100_max value: 24.974899996866757 - type: nauc_mrr_at_100_std value: 2.1882636690518256 - type: nauc_mrr_at_10_diff1 value: 38.031417501129106 - type: nauc_mrr_at_10_max value: 25.02204246091702 - type: nauc_mrr_at_10_std value: 1.7073869104185317 - type: nauc_mrr_at_1_diff1 value: 48.15437534861672 - type: nauc_mrr_at_1_max value: 28.63543344473674 - type: nauc_mrr_at_1_std value: 2.970876262345635 - type: nauc_mrr_at_20_diff1 value: 38.128248653080966 - type: nauc_mrr_at_20_max value: 24.952026253076998 - type: nauc_mrr_at_20_std value: 2.006922052216995 - type: nauc_mrr_at_3_diff1 value: 40.075767014514504 - type: nauc_mrr_at_3_max value: 26.543876767823356 - type: nauc_mrr_at_3_std value: 1.4758229539915473 - type: nauc_mrr_at_5_diff1 value: 38.27626231450101 - type: nauc_mrr_at_5_max value: 25.554184166817123 - type: nauc_mrr_at_5_std value: 1.5289469743765285 - type: nauc_ndcg_at_1000_diff1 value: 35.81305711429328 - type: nauc_ndcg_at_1000_max value: 21.462375611808884 - type: nauc_ndcg_at_1000_std value: 4.37817577864403 - type: nauc_ndcg_at_100_diff1 value: 35.931470390569075 - type: nauc_ndcg_at_100_max value: 21.320619926273025 - type: nauc_ndcg_at_100_std value: 3.261613822378584 - type: nauc_ndcg_at_10_diff1 value: 36.309714091319485 - type: nauc_ndcg_at_10_max value: 21.024554037914257 - type: nauc_ndcg_at_10_std value: 0.34537778188330615 - type: nauc_ndcg_at_1_diff1 value: 48.15437534861672 - type: nauc_ndcg_at_1_max value: 28.63543344473674 - type: nauc_ndcg_at_1_std value: 2.970876262345635 - type: nauc_ndcg_at_20_diff1 value: 36.55637547214553 - type: nauc_ndcg_at_20_max value: 21.054973880940498 - type: nauc_ndcg_at_20_std value: 1.255923276642131 - type: nauc_ndcg_at_3_diff1 value: 38.83527890609877 - type: nauc_ndcg_at_3_max value: 24.39276594538154 - type: nauc_ndcg_at_3_std value: -0.11070216705281503 - type: nauc_ndcg_at_5_diff1 value: 36.320235850347025 - type: nauc_ndcg_at_5_max value: 22.25222313573669 - type: nauc_ndcg_at_5_std value: 
0.24418344534659714 - type: nauc_precision_at_1000_diff1 value: 1.3553366783310352 - type: nauc_precision_at_1000_max value: 12.71154662811487 - type: nauc_precision_at_1000_std value: 14.501530463627166 - type: nauc_precision_at_100_diff1 value: 13.594445633079498 - type: nauc_precision_at_100_max value: 22.831050695945486 - type: nauc_precision_at_100_std value: 12.58168655119079 - type: nauc_precision_at_10_diff1 value: 24.370335349509663 - type: nauc_precision_at_10_max value: 22.87333144912103 - type: nauc_precision_at_10_std value: 2.9640170457571395 - type: nauc_precision_at_1_diff1 value: 48.15437534861672 - type: nauc_precision_at_1_max value: 28.63543344473674 - type: nauc_precision_at_1_std value: 2.970876262345635 - type: nauc_precision_at_20_diff1 value: 22.437172356428768 - type: nauc_precision_at_20_max value: 22.84883486847393 - type: nauc_precision_at_20_std value: 5.539373045213645 - type: nauc_precision_at_3_diff1 value: 32.80281631101501 - type: nauc_precision_at_3_max value: 26.749107103708347 - type: nauc_precision_at_3_std value: 2.083560285617921 - type: nauc_precision_at_5_diff1 value: 25.857893194609087 - type: nauc_precision_at_5_max value: 24.006008172789514 - type: nauc_precision_at_5_std value: 2.6470647298583816 - type: nauc_recall_at_1000_diff1 value: 21.271914690867405 - type: nauc_recall_at_1000_max value: 10.8254772553339 - type: nauc_recall_at_1000_std value: 24.222690055658997 - type: nauc_recall_at_100_diff1 value: 24.83018631818402 - type: nauc_recall_at_100_max value: 12.260027028539406 - type: nauc_recall_at_100_std value: 11.721583106210975 - type: nauc_recall_at_10_diff1 value: 28.25565512580088 - type: nauc_recall_at_10_max value: 14.450763859357815 - type: nauc_recall_at_10_std value: 0.7801836768161626 - type: nauc_recall_at_1_diff1 value: 49.32991220296194 - type: nauc_recall_at_1_max value: 24.83395680944923 - type: nauc_recall_at_1_std value: -0.7479527140782966 - type: nauc_recall_at_20_diff1 value: 28.871593968850156 - type: nauc_recall_at_20_max value: 13.961700743219929 - type: nauc_recall_at_20_std value: 3.5643293197299615 - type: nauc_recall_at_3_diff1 value: 32.57328129531904 - type: nauc_recall_at_3_max value: 20.433413425310835 - type: nauc_recall_at_3_std value: -1.247044503598521 - type: nauc_recall_at_5_diff1 value: 28.028510688953183 - type: nauc_recall_at_5_max value: 16.784307010617596 - type: nauc_recall_at_5_std value: -0.009997139996257565 - type: ndcg_at_1 value: 16.939 - type: ndcg_at_10 value: 22.541 - type: ndcg_at_100 value: 27.921000000000003 - type: ndcg_at_1000 value: 31.102 - type: ndcg_at_20 value: 24.285999999999998 - type: ndcg_at_3 value: 19.304 - type: ndcg_at_5 value: 20.996000000000002 - type: precision_at_1 value: 16.939 - type: precision_at_10 value: 4.186999999999999 - type: precision_at_100 value: 0.851 - type: precision_at_1000 value: 0.131 - type: precision_at_20 value: 2.656 - type: precision_at_3 value: 8.919 - type: precision_at_5 value: 6.641 - type: recall_at_1 value: 13.925 - type: recall_at_10 value: 29.826999999999998 - type: recall_at_100 value: 53.76800000000001 - type: recall_at_1000 value: 75.994 - type: recall_at_20 value: 35.947 - type: recall_at_3 value: 20.929000000000002 - type: recall_at_5 value: 25.202999999999996 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval (default) type: mteb/cqadupstack-programmers config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: main_score value: 16.89 - type: map_at_1 value: 9.166 - 
type: map_at_10 value: 13.538 - type: map_at_100 value: 14.338999999999999 - type: map_at_1000 value: 14.471 - type: map_at_20 value: 13.916999999999998 - type: map_at_3 value: 11.748 - type: map_at_5 value: 12.751000000000001 - type: mrr_at_1 value: 11.643835616438356 - type: mrr_at_10 value: 16.520575125027168 - type: mrr_at_100 value: 17.297302503248996 - type: mrr_at_1000 value: 17.398178665590223 - type: mrr_at_20 value: 16.91999523594904 - type: mrr_at_3 value: 14.573820395738199 - type: mrr_at_5 value: 15.646879756468794 - type: nauc_map_at_1000_diff1 value: 36.42648210684073 - type: nauc_map_at_1000_max value: 23.014439347329745 - type: nauc_map_at_1000_std value: 1.7167917957352532 - type: nauc_map_at_100_diff1 value: 36.41668695392086 - type: nauc_map_at_100_max value: 22.95286918473154 - type: nauc_map_at_100_std value: 1.6607854131698931 - type: nauc_map_at_10_diff1 value: 36.853249061667704 - type: nauc_map_at_10_max value: 23.30746444964867 - type: nauc_map_at_10_std value: 0.8047283371322353 - type: nauc_map_at_1_diff1 value: 47.16421621003639 - type: nauc_map_at_1_max value: 27.34193393838306 - type: nauc_map_at_1_std value: 0.6408395204554622 - type: nauc_map_at_20_diff1 value: 36.56584303750146 - type: nauc_map_at_20_max value: 23.115780372564476 - type: nauc_map_at_20_std value: 1.249550410204099 - type: nauc_map_at_3_diff1 value: 40.53580184557388 - type: nauc_map_at_3_max value: 23.635347744137672 - type: nauc_map_at_3_std value: 0.33170039388290995 - type: nauc_map_at_5_diff1 value: 37.81956825949432 - type: nauc_map_at_5_max value: 23.801068349520698 - type: nauc_map_at_5_std value: -0.05159349623603464 - type: nauc_mrr_at_1000_diff1 value: 33.82170381349714 - type: nauc_mrr_at_1000_max value: 24.509695389655278 - type: nauc_mrr_at_1000_std value: 0.38761162146831024 - type: nauc_mrr_at_100_diff1 value: 33.78083256685757 - type: nauc_mrr_at_100_max value: 24.46949787827838 - type: nauc_mrr_at_100_std value: 0.3727304295879898 - type: nauc_mrr_at_10_diff1 value: 34.04995222179279 - type: nauc_mrr_at_10_max value: 24.844254940118603 - type: nauc_mrr_at_10_std value: -0.09989395943351509 - type: nauc_mrr_at_1_diff1 value: 42.60409022051744 - type: nauc_mrr_at_1_max value: 28.557152433476706 - type: nauc_mrr_at_1_std value: -0.022054720915518654 - type: nauc_mrr_at_20_diff1 value: 33.87215561918837 - type: nauc_mrr_at_20_max value: 24.678806836379767 - type: nauc_mrr_at_20_std value: 0.07011412656469218 - type: nauc_mrr_at_3_diff1 value: 37.553351431355416 - type: nauc_mrr_at_3_max value: 24.96142716696304 - type: nauc_mrr_at_3_std value: 0.20818976575893774 - type: nauc_mrr_at_5_diff1 value: 34.990863336264105 - type: nauc_mrr_at_5_max value: 25.149251424623092 - type: nauc_mrr_at_5_std value: -0.36385730855435344 - type: nauc_ndcg_at_1000_diff1 value: 31.521772887139164 - type: nauc_ndcg_at_1000_max value: 21.820611295854476 - type: nauc_ndcg_at_1000_std value: 5.744438883711709 - type: nauc_ndcg_at_100_diff1 value: 30.860742071525365 - type: nauc_ndcg_at_100_max value: 20.333360034062228 - type: nauc_ndcg_at_100_std value: 4.817571323412305 - type: nauc_ndcg_at_10_diff1 value: 32.02591793840569 - type: nauc_ndcg_at_10_max value: 22.327582801844766 - type: nauc_ndcg_at_10_std value: 1.308815569375002 - type: nauc_ndcg_at_1_diff1 value: 42.60409022051744 - type: nauc_ndcg_at_1_max value: 28.557152433476706 - type: nauc_ndcg_at_1_std value: -0.022054720915518654 - type: nauc_ndcg_at_20_diff1 value: 31.183844509937447 - type: nauc_ndcg_at_20_max value: 21.710204283748464 
- type: nauc_ndcg_at_20_std value: 2.3543373338618716 - type: nauc_ndcg_at_3_diff1 value: 37.757093644477195 - type: nauc_ndcg_at_3_max value: 23.3515751628835 - type: nauc_ndcg_at_3_std value: 0.5117507109615564 - type: nauc_ndcg_at_5_diff1 value: 33.80970150542254 - type: nauc_ndcg_at_5_max value: 23.377489792676403 - type: nauc_ndcg_at_5_std value: -0.2893341840565308 - type: nauc_precision_at_1000_diff1 value: 3.707208967665837 - type: nauc_precision_at_1000_max value: 12.034292018846514 - type: nauc_precision_at_1000_std value: 6.802731430305505 - type: nauc_precision_at_100_diff1 value: 12.426875443830042 - type: nauc_precision_at_100_max value: 12.988732249870225 - type: nauc_precision_at_100_std value: 11.037489289119383 - type: nauc_precision_at_10_diff1 value: 19.964451016510218 - type: nauc_precision_at_10_max value: 21.483257270810522 - type: nauc_precision_at_10_std value: 2.2065598381345053 - type: nauc_precision_at_1_diff1 value: 42.60409022051744 - type: nauc_precision_at_1_max value: 28.557152433476706 - type: nauc_precision_at_1_std value: -0.022054720915518654 - type: nauc_precision_at_20_diff1 value: 17.519760734491374 - type: nauc_precision_at_20_max value: 19.42156895187867 - type: nauc_precision_at_20_std value: 5.58566386311753 - type: nauc_precision_at_3_diff1 value: 30.863362948010643 - type: nauc_precision_at_3_max value: 21.97149191045173 - type: nauc_precision_at_3_std value: -0.10795969935082905 - type: nauc_precision_at_5_diff1 value: 24.57403889839064 - type: nauc_precision_at_5_max value: 23.330523157159384 - type: nauc_precision_at_5_std value: -0.5736565687187795 - type: nauc_recall_at_1000_diff1 value: 21.845537827759255 - type: nauc_recall_at_1000_max value: 16.85933147171258 - type: nauc_recall_at_1000_std value: 22.408020236230566 - type: nauc_recall_at_100_diff1 value: 19.987143599818943 - type: nauc_recall_at_100_max value: 10.475075018778545 - type: nauc_recall_at_100_std value: 13.795219707527833 - type: nauc_recall_at_10_diff1 value: 22.012495555108874 - type: nauc_recall_at_10_max value: 17.742806672295814 - type: nauc_recall_at_10_std value: 3.3663340109082194 - type: nauc_recall_at_1_diff1 value: 47.16421621003639 - type: nauc_recall_at_1_max value: 27.34193393838306 - type: nauc_recall_at_1_std value: 0.6408395204554622 - type: nauc_recall_at_20_diff1 value: 20.24245341403342 - type: nauc_recall_at_20_max value: 16.292684691149837 - type: nauc_recall_at_20_std value: 5.732480922479413 - type: nauc_recall_at_3_diff1 value: 34.061353914493004 - type: nauc_recall_at_3_max value: 19.701505268864018 - type: nauc_recall_at_3_std value: 0.15707036102604408 - type: nauc_recall_at_5_diff1 value: 25.41386728745299 - type: nauc_recall_at_5_max value: 19.7756818671563 - type: nauc_recall_at_5_std value: -1.0264446116247112 - type: ndcg_at_1 value: 11.644 - type: ndcg_at_10 value: 16.89 - type: ndcg_at_100 value: 21.104 - type: ndcg_at_1000 value: 24.669 - type: ndcg_at_20 value: 18.195 - type: ndcg_at_3 value: 13.350999999999999 - type: ndcg_at_5 value: 15.02 - type: precision_at_1 value: 11.644 - type: precision_at_10 value: 3.276 - type: precision_at_100 value: 0.652 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_20 value: 2.043 - type: precision_at_3 value: 6.3549999999999995 - type: precision_at_5 value: 4.8629999999999995 - type: recall_at_1 value: 9.166 - type: recall_at_10 value: 24.38 - type: recall_at_100 value: 43.174 - type: recall_at_1000 value: 69.063 - type: recall_at_20 value: 28.89 - type: recall_at_3 value: 
14.674999999999999 - type: recall_at_5 value: 18.864 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval (default) type: CQADupstackRetrieval_is_a_combined_dataset config: default split: test revision: CQADupstackRetrieval_is_a_combined_dataset metrics: - type: main_score value: 19.451833333333333 - type: ndcg_at_10 value: 19.451833333333333 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval (default) type: mteb/cqadupstack-stats config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: main_score value: 15.190000000000001 - type: map_at_1 value: 8.588 - type: map_at_10 value: 12.491 - type: map_at_100 value: 13.181000000000001 - type: map_at_1000 value: 13.272 - type: map_at_20 value: 12.803 - type: map_at_3 value: 11.171000000000001 - type: map_at_5 value: 11.792 - type: mrr_at_1 value: 10.122699386503067 - type: mrr_at_10 value: 14.334769695199148 - type: mrr_at_100 value: 15.038531985477402 - type: mrr_at_1000 value: 15.118584906152948 - type: mrr_at_20 value: 14.643456341375582 - type: mrr_at_3 value: 13.011247443762786 - type: mrr_at_5 value: 13.586400817995917 - type: nauc_map_at_1000_diff1 value: 32.49525214361852 - type: nauc_map_at_1000_max value: 25.00989242287795 - type: nauc_map_at_1000_std value: -6.0481296083442215 - type: nauc_map_at_100_diff1 value: 32.58412301567017 - type: nauc_map_at_100_max value: 25.00710798346013 - type: nauc_map_at_100_std value: -6.027212357257859 - type: nauc_map_at_10_diff1 value: 32.59408959509193 - type: nauc_map_at_10_max value: 25.590812515768057 - type: nauc_map_at_10_std value: -6.723358516793515 - type: nauc_map_at_1_diff1 value: 39.31467044788035 - type: nauc_map_at_1_max value: 30.076159948793276 - type: nauc_map_at_1_std value: -7.409917402741314 - type: nauc_map_at_20_diff1 value: 32.59390259000842 - type: nauc_map_at_20_max value: 25.24747833386027 - type: nauc_map_at_20_std value: -6.327479010788288 - type: nauc_map_at_3_diff1 value: 34.27305943120105 - type: nauc_map_at_3_max value: 27.325746934815616 - type: nauc_map_at_3_std value: -7.588768866133594 - type: nauc_map_at_5_diff1 value: 33.084018261535256 - type: nauc_map_at_5_max value: 26.240785153709425 - type: nauc_map_at_5_std value: -7.145825000341606 - type: nauc_mrr_at_1000_diff1 value: 32.13146292629234 - type: nauc_mrr_at_1000_max value: 27.012685186249 - type: nauc_mrr_at_1000_std value: -3.576499416328648 - type: nauc_mrr_at_100_diff1 value: 32.1598198156621 - type: nauc_mrr_at_100_max value: 26.99007757074476 - type: nauc_mrr_at_100_std value: -3.5328041627513387 - type: nauc_mrr_at_10_diff1 value: 32.2769559954424 - type: nauc_mrr_at_10_max value: 27.671797146230915 - type: nauc_mrr_at_10_std value: -4.014326165260914 - type: nauc_mrr_at_1_diff1 value: 39.49445020079931 - type: nauc_mrr_at_1_max value: 32.47498778564666 - type: nauc_mrr_at_1_std value: -3.9005316134362285 - type: nauc_mrr_at_20_diff1 value: 32.1506954430531 - type: nauc_mrr_at_20_max value: 27.21472311716892 - type: nauc_mrr_at_20_std value: -3.8339274287542295 - type: nauc_mrr_at_3_diff1 value: 34.213957754732874 - type: nauc_mrr_at_3_max value: 29.81396274867843 - type: nauc_mrr_at_3_std value: -4.242564017046673 - type: nauc_mrr_at_5_diff1 value: 32.79023586229421 - type: nauc_mrr_at_5_max value: 28.563242912189224 - type: nauc_mrr_at_5_std value: -4.347078530440767 - type: nauc_ndcg_at_1000_diff1 value: 28.030132389809143 - type: nauc_ndcg_at_1000_max value: 20.521142889145125 - type: nauc_ndcg_at_1000_std value: 
-3.4641513799298465 - type: nauc_ndcg_at_100_diff1 value: 29.790867206467205 - type: nauc_ndcg_at_100_max value: 20.777998695211025 - type: nauc_ndcg_at_100_std value: -3.082355174684713 - type: nauc_ndcg_at_10_diff1 value: 29.99477135479973 - type: nauc_ndcg_at_10_max value: 23.59847010475954 - type: nauc_ndcg_at_10_std value: -5.388778425113355 - type: nauc_ndcg_at_1_diff1 value: 39.49445020079931 - type: nauc_ndcg_at_1_max value: 32.47498778564666 - type: nauc_ndcg_at_1_std value: -3.9005316134362285 - type: nauc_ndcg_at_20_diff1 value: 29.832962796031044 - type: nauc_ndcg_at_20_max value: 22.19789441941385 - type: nauc_ndcg_at_20_std value: -4.678750624503098 - type: nauc_ndcg_at_3_diff1 value: 33.28264932851035 - type: nauc_ndcg_at_3_max value: 27.237791722895505 - type: nauc_ndcg_at_3_std value: -6.42213360173857 - type: nauc_ndcg_at_5_diff1 value: 31.131290570314228 - type: nauc_ndcg_at_5_max value: 25.12722717817001 - type: nauc_ndcg_at_5_std value: -6.150569476219248 - type: nauc_precision_at_1000_diff1 value: 9.392568676712683 - type: nauc_precision_at_1000_max value: 11.20864013974632 - type: nauc_precision_at_1000_std value: 5.320810472292775 - type: nauc_precision_at_100_diff1 value: 23.329271108392348 - type: nauc_precision_at_100_max value: 15.096990134028458 - type: nauc_precision_at_100_std value: 6.463877644271909 - type: nauc_precision_at_10_diff1 value: 26.07195079393671 - type: nauc_precision_at_10_max value: 23.315213833722375 - type: nauc_precision_at_10_std value: -0.7973933486646361 - type: nauc_precision_at_1_diff1 value: 39.49445020079931 - type: nauc_precision_at_1_max value: 32.47498778564666 - type: nauc_precision_at_1_std value: -3.9005316134362285 - type: nauc_precision_at_20_diff1 value: 26.006356559701437 - type: nauc_precision_at_20_max value: 20.64452647574728 - type: nauc_precision_at_20_std value: 1.186976191997027 - type: nauc_precision_at_3_diff1 value: 31.349575990830747 - type: nauc_precision_at_3_max value: 27.619655967592983 - type: nauc_precision_at_3_std value: -3.5875703843406144 - type: nauc_precision_at_5_diff1 value: 28.056629721139153 - type: nauc_precision_at_5_max value: 24.93477215782415 - type: nauc_precision_at_5_std value: -2.07688747626092 - type: nauc_recall_at_1000_diff1 value: 11.939738127565153 - type: nauc_recall_at_1000_max value: 3.1013420342149427 - type: nauc_recall_at_1000_std value: 0.42106295882988565 - type: nauc_recall_at_100_diff1 value: 23.1148888679206 - type: nauc_recall_at_100_max value: 7.879492884697378 - type: nauc_recall_at_100_std value: 1.9008293630458633 - type: nauc_recall_at_10_diff1 value: 23.290862746428513 - type: nauc_recall_at_10_max value: 16.127629443707487 - type: nauc_recall_at_10_std value: -4.448472009523851 - type: nauc_recall_at_1_diff1 value: 39.31467044788035 - type: nauc_recall_at_1_max value: 30.076159948793276 - type: nauc_recall_at_1_std value: -7.409917402741314 - type: nauc_recall_at_20_diff1 value: 23.189927344334322 - type: nauc_recall_at_20_max value: 12.404091273454796 - type: nauc_recall_at_20_std value: -3.1379735901683317 - type: nauc_recall_at_3_diff1 value: 29.35343707457242 - type: nauc_recall_at_3_max value: 23.518636184215154 - type: nauc_recall_at_3_std value: -6.676520147409216 - type: nauc_recall_at_5_diff1 value: 25.982556962678487 - type: nauc_recall_at_5_max value: 19.86486077269299 - type: nauc_recall_at_5_std value: -6.003801784768082 - type: ndcg_at_1 value: 10.123 - type: ndcg_at_10 value: 15.190000000000001 - type: ndcg_at_100 value: 19.052 - type: ndcg_at_1000 
value: 21.769 - type: ndcg_at_20 value: 16.298000000000002 - type: ndcg_at_3 value: 12.589 - type: ndcg_at_5 value: 13.535 - type: precision_at_1 value: 10.123 - type: precision_at_10 value: 2.6839999999999997 - type: precision_at_100 value: 0.503 - type: precision_at_1000 value: 0.08 - type: precision_at_20 value: 1.603 - type: precision_at_3 value: 5.726 - type: precision_at_5 value: 4.109999999999999 - type: recall_at_1 value: 8.588 - type: recall_at_10 value: 21.834 - type: recall_at_100 value: 40.309 - type: recall_at_1000 value: 61.208 - type: recall_at_20 value: 26.070999999999998 - type: recall_at_3 value: 14.399000000000001 - type: recall_at_5 value: 16.875999999999998 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval (default) type: mteb/cqadupstack-tex config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: main_score value: 11.503 - type: map_at_1 value: 6.542000000000001 - type: map_at_10 value: 9.411999999999999 - type: map_at_100 value: 10.030999999999999 - type: map_at_1000 value: 10.14 - type: map_at_20 value: 9.724 - type: map_at_3 value: 8.509 - type: map_at_5 value: 8.965 - type: mrr_at_1 value: 8.121128699242945 - type: mrr_at_10 value: 11.487303225947409 - type: mrr_at_100 value: 12.144070985687668 - type: mrr_at_1000 value: 12.23492200312306 - type: mrr_at_20 value: 11.824789289064652 - type: mrr_at_3 value: 10.438173893094747 - type: mrr_at_5 value: 10.945744436797428 - type: nauc_map_at_1000_diff1 value: 32.70276581980958 - type: nauc_map_at_1000_max value: 16.03417959943129 - type: nauc_map_at_1000_std value: -5.72561310082251 - type: nauc_map_at_100_diff1 value: 32.74170233438755 - type: nauc_map_at_100_max value: 16.007188000450924 - type: nauc_map_at_100_std value: -5.866527320820588 - type: nauc_map_at_10_diff1 value: 33.65756116022195 - type: nauc_map_at_10_max value: 16.329704041974207 - type: nauc_map_at_10_std value: -6.532157318286642 - type: nauc_map_at_1_diff1 value: 42.13696871713339 - type: nauc_map_at_1_max value: 17.632090262590623 - type: nauc_map_at_1_std value: -7.011301507001842 - type: nauc_map_at_20_diff1 value: 32.96793409764783 - type: nauc_map_at_20_max value: 16.11279519186098 - type: nauc_map_at_20_std value: -6.316702747144485 - type: nauc_map_at_3_diff1 value: 35.85582815528229 - type: nauc_map_at_3_max value: 17.119718606824765 - type: nauc_map_at_3_std value: -6.75128616063151 - type: nauc_map_at_5_diff1 value: 34.703608964177015 - type: nauc_map_at_5_max value: 16.774418221756946 - type: nauc_map_at_5_std value: -6.7924413895275135 - type: nauc_mrr_at_1000_diff1 value: 33.25123047452874 - type: nauc_mrr_at_1000_max value: 17.664781297091984 - type: nauc_mrr_at_1000_std value: -4.883960114347252 - type: nauc_mrr_at_100_diff1 value: 33.26376684107494 - type: nauc_mrr_at_100_max value: 17.660366713140917 - type: nauc_mrr_at_100_std value: -4.936094906621694 - type: nauc_mrr_at_10_diff1 value: 34.14453970601731 - type: nauc_mrr_at_10_max value: 18.078450957158427 - type: nauc_mrr_at_10_std value: -5.56029931021929 - type: nauc_mrr_at_1_diff1 value: 42.624124463773974 - type: nauc_mrr_at_1_max value: 19.644592703779377 - type: nauc_mrr_at_1_std value: -6.847467406875957 - type: nauc_mrr_at_20_diff1 value: 33.48658556695367 - type: nauc_mrr_at_20_max value: 17.854173270865513 - type: nauc_mrr_at_20_std value: -5.307384000928626 - type: nauc_mrr_at_3_diff1 value: 36.42777944064556 - type: nauc_mrr_at_3_max value: 18.818021509412347 - type: nauc_mrr_at_3_std value: -5.971767723227725 
- type: nauc_mrr_at_5_diff1 value: 35.26890794067812 - type: nauc_mrr_at_5_max value: 18.536432127845615 - type: nauc_mrr_at_5_std value: -5.955315816111514 - type: nauc_ndcg_at_1000_diff1 value: 26.787545842668386 - type: nauc_ndcg_at_1000_max value: 14.668417213125176 - type: nauc_ndcg_at_1000_std value: 0.11283761427226682 - type: nauc_ndcg_at_100_diff1 value: 27.296346462130778 - type: nauc_ndcg_at_100_max value: 14.628630017107083 - type: nauc_ndcg_at_100_std value: -2.5838126321301287 - type: nauc_ndcg_at_10_diff1 value: 30.729975615630583 - type: nauc_ndcg_at_10_max value: 15.984165870709463 - type: nauc_ndcg_at_10_std value: -5.795796151010406 - type: nauc_ndcg_at_1_diff1 value: 42.624124463773974 - type: nauc_ndcg_at_1_max value: 19.644592703779377 - type: nauc_ndcg_at_1_std value: -6.847467406875957 - type: nauc_ndcg_at_20_diff1 value: 28.62024015680217 - type: nauc_ndcg_at_20_max value: 15.22451859400659 - type: nauc_ndcg_at_20_std value: -5.156813837280861 - type: nauc_ndcg_at_3_diff1 value: 34.82831844406019 - type: nauc_ndcg_at_3_max value: 17.789223218636945 - type: nauc_ndcg_at_3_std value: -6.383595531284539 - type: nauc_ndcg_at_5_diff1 value: 32.85603864688551 - type: nauc_ndcg_at_5_max value: 17.05358609428122 - type: nauc_ndcg_at_5_std value: -6.376667913153048 - type: nauc_precision_at_1000_diff1 value: 11.468656684649677 - type: nauc_precision_at_1000_max value: 15.320322507806294 - type: nauc_precision_at_1000_std value: 16.669904386742214 - type: nauc_precision_at_100_diff1 value: 17.31311828660998 - type: nauc_precision_at_100_max value: 17.18604042044477 - type: nauc_precision_at_100_std value: 6.921989479762083 - type: nauc_precision_at_10_diff1 value: 24.341600277154242 - type: nauc_precision_at_10_max value: 18.290595240997305 - type: nauc_precision_at_10_std value: -3.249248531480952 - type: nauc_precision_at_1_diff1 value: 42.624124463773974 - type: nauc_precision_at_1_max value: 19.644592703779377 - type: nauc_precision_at_1_std value: -6.847467406875957 - type: nauc_precision_at_20_diff1 value: 19.67933630715089 - type: nauc_precision_at_20_max value: 17.708788971071886 - type: nauc_precision_at_20_std value: -1.698058343596388 - type: nauc_precision_at_3_diff1 value: 32.56407923967103 - type: nauc_precision_at_3_max value: 20.008945086974204 - type: nauc_precision_at_3_std value: -5.700587196952845 - type: nauc_precision_at_5_diff1 value: 28.910777719175375 - type: nauc_precision_at_5_max value: 19.181013952415274 - type: nauc_precision_at_5_std value: -5.09856965471284 - type: nauc_recall_at_1000_diff1 value: 12.396394270885589 - type: nauc_recall_at_1000_max value: 8.239418701743709 - type: nauc_recall_at_1000_std value: 15.546192718064672 - type: nauc_recall_at_100_diff1 value: 15.657113708258077 - type: nauc_recall_at_100_max value: 9.7558897450188 - type: nauc_recall_at_100_std value: 3.7828006481678327 - type: nauc_recall_at_10_diff1 value: 23.540703764594824 - type: nauc_recall_at_10_max value: 12.514108862838025 - type: nauc_recall_at_10_std value: -4.890712777213581 - type: nauc_recall_at_1_diff1 value: 42.13696871713339 - type: nauc_recall_at_1_max value: 17.632090262590623 - type: nauc_recall_at_1_std value: -7.011301507001842 - type: nauc_recall_at_20_diff1 value: 18.632795869246763 - type: nauc_recall_at_20_max value: 10.781667052463174 - type: nauc_recall_at_20_std value: -3.3062758301873467 - type: nauc_recall_at_3_diff1 value: 29.84753634947647 - type: nauc_recall_at_3_max value: 15.743144468924344 - type: nauc_recall_at_3_std value: 
-6.214675269831871 - type: nauc_recall_at_5_diff1 value: 26.80447414490652 - type: nauc_recall_at_5_max value: 14.403515700429177 - type: nauc_recall_at_5_std value: -6.259205870944759 - type: ndcg_at_1 value: 8.121 - type: ndcg_at_10 value: 11.503 - type: ndcg_at_100 value: 14.951 - type: ndcg_at_1000 value: 18.196 - type: ndcg_at_20 value: 12.614 - type: ndcg_at_3 value: 9.743 - type: ndcg_at_5 value: 10.435 - type: precision_at_1 value: 8.121 - type: precision_at_10 value: 2.168 - type: precision_at_100 value: 0.468 - type: precision_at_1000 value: 0.089 - type: precision_at_20 value: 1.383 - type: precision_at_3 value: 4.6690000000000005 - type: precision_at_5 value: 3.345 - type: recall_at_1 value: 6.542000000000001 - type: recall_at_10 value: 15.794 - type: recall_at_100 value: 32.031 - type: recall_at_1000 value: 56.263 - type: recall_at_20 value: 20.023 - type: recall_at_3 value: 10.791 - type: recall_at_5 value: 12.61 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval (default) type: mteb/cqadupstack-unix config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: main_score value: 18.752 - type: map_at_1 value: 12.076 - type: map_at_10 value: 15.886 - type: map_at_100 value: 16.525000000000002 - type: map_at_1000 value: 16.628 - type: map_at_20 value: 16.150000000000002 - type: map_at_3 value: 14.637 - type: map_at_5 value: 15.265999999999998 - type: mrr_at_1 value: 14.458955223880595 - type: mrr_at_10 value: 18.78960850509357 - type: mrr_at_100 value: 19.457515825168713 - type: mrr_at_1000 value: 19.544411963686347 - type: mrr_at_20 value: 19.069352610955498 - type: mrr_at_3 value: 17.50621890547264 - type: mrr_at_5 value: 18.135883084577117 - type: nauc_map_at_1000_diff1 value: 35.26453666091026 - type: nauc_map_at_1000_max value: 28.45949873807009 - type: nauc_map_at_1000_std value: -3.4139786458650603 - type: nauc_map_at_100_diff1 value: 35.26758793761312 - type: nauc_map_at_100_max value: 28.427395341056673 - type: nauc_map_at_100_std value: -3.494357914459209 - type: nauc_map_at_10_diff1 value: 35.748030827297846 - type: nauc_map_at_10_max value: 28.709693519088635 - type: nauc_map_at_10_std value: -4.0888030664931545 - type: nauc_map_at_1_diff1 value: 41.858308280129286 - type: nauc_map_at_1_max value: 29.59713822513886 - type: nauc_map_at_1_std value: -5.112958479444919 - type: nauc_map_at_20_diff1 value: 35.53257258132197 - type: nauc_map_at_20_max value: 28.65465491465789 - type: nauc_map_at_20_std value: -3.844442722241712 - type: nauc_map_at_3_diff1 value: 36.65786183200192 - type: nauc_map_at_3_max value: 28.80283494555713 - type: nauc_map_at_3_std value: -3.956759027099864 - type: nauc_map_at_5_diff1 value: 36.45785727569078 - type: nauc_map_at_5_max value: 28.987265101067706 - type: nauc_map_at_5_std value: -3.8836573002904364 - type: nauc_mrr_at_1000_diff1 value: 33.15170628844491 - type: nauc_mrr_at_1000_max value: 29.80316660586958 - type: nauc_mrr_at_1000_std value: -2.919368628674066 - type: nauc_mrr_at_100_diff1 value: 33.149497124475005 - type: nauc_mrr_at_100_max value: 29.791578160522104 - type: nauc_mrr_at_100_std value: -2.9631398714502812 - type: nauc_mrr_at_10_diff1 value: 33.55199061618286 - type: nauc_mrr_at_10_max value: 30.069009995703794 - type: nauc_mrr_at_10_std value: -3.6083857944611797 - type: nauc_mrr_at_1_diff1 value: 40.186482910894526 - type: nauc_mrr_at_1_max value: 32.037574024173274 - type: nauc_mrr_at_1_std value: -3.9185583280706497 - type: nauc_mrr_at_20_diff1 value: 
33.29736140197984 - type: nauc_mrr_at_20_max value: 29.987219611017764 - type: nauc_mrr_at_20_std value: -3.2911243316613477 - type: nauc_mrr_at_3_diff1 value: 34.59766570016104 - type: nauc_mrr_at_3_max value: 30.548093957699834 - type: nauc_mrr_at_3_std value: -3.548724979573667 - type: nauc_mrr_at_5_diff1 value: 34.18658889496389 - type: nauc_mrr_at_5_max value: 30.41947286010115 - type: nauc_mrr_at_5_std value: -3.43375074675157 - type: nauc_ndcg_at_1000_diff1 value: 30.49383193075413 - type: nauc_ndcg_at_1000_max value: 26.437945296729847 - type: nauc_ndcg_at_1000_std value: 0.713575479477255 - type: nauc_ndcg_at_100_diff1 value: 30.39984801831684 - type: nauc_ndcg_at_100_max value: 26.05310862803912 - type: nauc_ndcg_at_100_std value: -0.9969079892996344 - type: nauc_ndcg_at_10_diff1 value: 32.67867574566094 - type: nauc_ndcg_at_10_max value: 28.071536866518898 - type: nauc_ndcg_at_10_std value: -4.0839672791072035 - type: nauc_ndcg_at_1_diff1 value: 40.186482910894526 - type: nauc_ndcg_at_1_max value: 32.037574024173274 - type: nauc_ndcg_at_1_std value: -3.9185583280706497 - type: nauc_ndcg_at_20_diff1 value: 31.87681672318583 - type: nauc_ndcg_at_20_max value: 27.757429962292935 - type: nauc_ndcg_at_20_std value: -3.289181709637281 - type: nauc_ndcg_at_3_diff1 value: 34.496401264219436 - type: nauc_ndcg_at_3_max value: 29.14164273814545 - type: nauc_ndcg_at_3_std value: -3.6284439880158454 - type: nauc_ndcg_at_5_diff1 value: 34.246766411944606 - type: nauc_ndcg_at_5_max value: 28.94897772325865 - type: nauc_ndcg_at_5_std value: -3.55118261356311 - type: nauc_precision_at_1000_diff1 value: 5.378065708185438 - type: nauc_precision_at_1000_max value: 13.48764762389057 - type: nauc_precision_at_1000_std value: 18.691426967517767 - type: nauc_precision_at_100_diff1 value: 13.43482265345938 - type: nauc_precision_at_100_max value: 18.365831924084738 - type: nauc_precision_at_100_std value: 9.235798636518911 - type: nauc_precision_at_10_diff1 value: 22.83462539079133 - type: nauc_precision_at_10_max value: 28.88737216224709 - type: nauc_precision_at_10_std value: -3.6618498163720496 - type: nauc_precision_at_1_diff1 value: 40.186482910894526 - type: nauc_precision_at_1_max value: 32.037574024173274 - type: nauc_precision_at_1_std value: -3.9185583280706497 - type: nauc_precision_at_20_diff1 value: 20.85661718188355 - type: nauc_precision_at_20_max value: 27.64527011746391 - type: nauc_precision_at_20_std value: -0.6120961992383614 - type: nauc_precision_at_3_diff1 value: 28.964157983970857 - type: nauc_precision_at_3_max value: 29.400327308652884 - type: nauc_precision_at_3_std value: -3.1499697700355336 - type: nauc_precision_at_5_diff1 value: 27.504587117367418 - type: nauc_precision_at_5_max value: 30.07226208448269 - type: nauc_precision_at_5_std value: -2.349913933244111 - type: nauc_recall_at_1000_diff1 value: 15.55962119542935 - type: nauc_recall_at_1000_max value: 14.319938855591138 - type: nauc_recall_at_1000_std value: 17.755185961944168 - type: nauc_recall_at_100_diff1 value: 17.13835133172289 - type: nauc_recall_at_100_max value: 14.963855394840023 - type: nauc_recall_at_100_std value: 6.03739710571083 - type: nauc_recall_at_10_diff1 value: 25.825685913064444 - type: nauc_recall_at_10_max value: 23.892438517711863 - type: nauc_recall_at_10_std value: -4.618370778838095 - type: nauc_recall_at_1_diff1 value: 41.858308280129286 - type: nauc_recall_at_1_max value: 29.59713822513886 - type: nauc_recall_at_1_std value: -5.112958479444919 - type: nauc_recall_at_20_diff1 value: 
23.270446548799935 - type: nauc_recall_at_20_max value: 22.676377474931055 - type: nauc_recall_at_20_std value: -2.4631378318557635 - type: nauc_recall_at_3_diff1 value: 31.100368984587128 - type: nauc_recall_at_3_max value: 27.09922934111932 - type: nauc_recall_at_3_std value: -3.1714853286064946 - type: nauc_recall_at_5_diff1 value: 29.82135009500676 - type: nauc_recall_at_5_max value: 26.424051798244985 - type: nauc_recall_at_5_std value: -2.966236526459052 - type: ndcg_at_1 value: 14.459 - type: ndcg_at_10 value: 18.752 - type: ndcg_at_100 value: 22.488 - type: ndcg_at_1000 value: 25.463 - type: ndcg_at_20 value: 19.703 - type: ndcg_at_3 value: 16.317 - type: ndcg_at_5 value: 17.267 - type: precision_at_1 value: 14.459 - type: precision_at_10 value: 3.1530000000000005 - type: precision_at_100 value: 0.567 - type: precision_at_1000 value: 0.091 - type: precision_at_20 value: 1.8190000000000002 - type: precision_at_3 value: 7.369000000000001 - type: precision_at_5 value: 5.131 - type: recall_at_1 value: 12.076 - type: recall_at_10 value: 24.901999999999997 - type: recall_at_100 value: 42.535000000000004 - type: recall_at_1000 value: 64.786 - type: recall_at_20 value: 28.42 - type: recall_at_3 value: 17.871000000000002 - type: recall_at_5 value: 20.328 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval (default) type: mteb/cqadupstack-webmasters config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: main_score value: 21.488 - type: map_at_1 value: 13.569999999999999 - type: map_at_10 value: 18.184 - type: map_at_100 value: 19.151 - type: map_at_1000 value: 19.331 - type: map_at_20 value: 18.619 - type: map_at_3 value: 16.666 - type: map_at_5 value: 17.73 - type: mrr_at_1 value: 17.193675889328063 - type: mrr_at_10 value: 21.833082376560643 - type: mrr_at_100 value: 22.67117038809971 - type: mrr_at_1000 value: 22.76433404351483 - type: mrr_at_20 value: 22.20200942617089 - type: mrr_at_3 value: 20.52042160737813 - type: mrr_at_5 value: 21.380105401844535 - type: nauc_map_at_1000_diff1 value: 35.49958838022679 - type: nauc_map_at_1000_max value: 27.74062097598903 - type: nauc_map_at_1000_std value: -10.515093354385309 - type: nauc_map_at_100_diff1 value: 35.56722100038519 - type: nauc_map_at_100_max value: 27.827605374816354 - type: nauc_map_at_100_std value: -10.631512595972834 - type: nauc_map_at_10_diff1 value: 35.91616127603119 - type: nauc_map_at_10_max value: 28.165439663736507 - type: nauc_map_at_10_std value: -11.08789520401649 - type: nauc_map_at_1_diff1 value: 43.19178740943906 - type: nauc_map_at_1_max value: 30.877102640311726 - type: nauc_map_at_1_std value: -14.165080939187726 - type: nauc_map_at_20_diff1 value: 35.79766863342843 - type: nauc_map_at_20_max value: 28.059404735661243 - type: nauc_map_at_20_std value: -11.072321333753566 - type: nauc_map_at_3_diff1 value: 37.897605640025475 - type: nauc_map_at_3_max value: 28.177172477006117 - type: nauc_map_at_3_std value: -12.136111183330279 - type: nauc_map_at_5_diff1 value: 36.44434777898687 - type: nauc_map_at_5_max value: 28.438512971898394 - type: nauc_map_at_5_std value: -10.926696695866928 - type: nauc_mrr_at_1000_diff1 value: 36.13714281845032 - type: nauc_mrr_at_1000_max value: 26.282536844730803 - type: nauc_mrr_at_1000_std value: -9.856391084807372 - type: nauc_mrr_at_100_diff1 value: 36.11260358526963 - type: nauc_mrr_at_100_max value: 26.251055434341158 - type: nauc_mrr_at_100_std value: -9.866249832625387 - type: nauc_mrr_at_10_diff1 value: 
36.39768434891786 - type: nauc_mrr_at_10_max value: 26.369874684734597 - type: nauc_mrr_at_10_std value: -10.140677127064409 - type: nauc_mrr_at_1_diff1 value: 43.97681003969528 - type: nauc_mrr_at_1_max value: 29.836613510418573 - type: nauc_mrr_at_1_std value: -13.729257304690295 - type: nauc_mrr_at_20_diff1 value: 36.2936027454046 - type: nauc_mrr_at_20_max value: 26.312955186456488 - type: nauc_mrr_at_20_std value: -10.177068130665152 - type: nauc_mrr_at_3_diff1 value: 38.01813544163268 - type: nauc_mrr_at_3_max value: 26.450298271894578 - type: nauc_mrr_at_3_std value: -10.606258695223955 - type: nauc_mrr_at_5_diff1 value: 36.66139719774965 - type: nauc_mrr_at_5_max value: 26.509309350284294 - type: nauc_mrr_at_5_std value: -9.947243479271682 - type: nauc_ndcg_at_1000_diff1 value: 31.791493593552133 - type: nauc_ndcg_at_1000_max value: 25.324361418674858 - type: nauc_ndcg_at_1000_std value: -6.7443196116990425 - type: nauc_ndcg_at_100_diff1 value: 31.54953518236872 - type: nauc_ndcg_at_100_max value: 25.188716359357414 - type: nauc_ndcg_at_100_std value: -6.839894709820292 - type: nauc_ndcg_at_10_diff1 value: 33.098147949306394 - type: nauc_ndcg_at_10_max value: 25.405004571973617 - type: nauc_ndcg_at_10_std value: -9.445873172910993 - type: nauc_ndcg_at_1_diff1 value: 43.97681003969528 - type: nauc_ndcg_at_1_max value: 29.836613510418573 - type: nauc_ndcg_at_1_std value: -13.729257304690295 - type: nauc_ndcg_at_20_diff1 value: 32.92224490482159 - type: nauc_ndcg_at_20_max value: 25.547859604065703 - type: nauc_ndcg_at_20_std value: -9.241908708414929 - type: nauc_ndcg_at_3_diff1 value: 36.53902441073446 - type: nauc_ndcg_at_3_max value: 25.133819114707258 - type: nauc_ndcg_at_3_std value: -10.692158418093511 - type: nauc_ndcg_at_5_diff1 value: 33.95545160989453 - type: nauc_ndcg_at_5_max value: 25.718632036099127 - type: nauc_ndcg_at_5_std value: -9.232699386322327 - type: nauc_precision_at_1000_diff1 value: 0.7176996575689929 - type: nauc_precision_at_1000_max value: -6.206679830059766 - type: nauc_precision_at_1000_std value: 15.194409401229048 - type: nauc_precision_at_100_diff1 value: 6.0746313447861455 - type: nauc_precision_at_100_max value: 1.8294518479685982 - type: nauc_precision_at_100_std value: 8.37195469826675 - type: nauc_precision_at_10_diff1 value: 20.73981815339893 - type: nauc_precision_at_10_max value: 15.478261828007453 - type: nauc_precision_at_10_std value: -5.5561745194715275 - type: nauc_precision_at_1_diff1 value: 43.97681003969528 - type: nauc_precision_at_1_max value: 29.836613510418573 - type: nauc_precision_at_1_std value: -13.729257304690295 - type: nauc_precision_at_20_diff1 value: 19.796357243134437 - type: nauc_precision_at_20_max value: 14.737729170595262 - type: nauc_precision_at_20_std value: -1.9384122215911435 - type: nauc_precision_at_3_diff1 value: 31.865572834643885 - type: nauc_precision_at_3_max value: 20.374070383077616 - type: nauc_precision_at_3_std value: -8.278156186226331 - type: nauc_precision_at_5_diff1 value: 24.892982796410482 - type: nauc_precision_at_5_max value: 18.471691298099184 - type: nauc_precision_at_5_std value: -5.556018739034546 - type: nauc_recall_at_1000_diff1 value: 13.11384429793443 - type: nauc_recall_at_1000_max value: 14.1557785679994 - type: nauc_recall_at_1000_std value: 9.786662648320794 - type: nauc_recall_at_100_diff1 value: 18.975726964682863 - type: nauc_recall_at_100_max value: 17.463053263913643 - type: nauc_recall_at_100_std value: 5.193025295117909 - type: nauc_recall_at_10_diff1 value: 
26.179450874152614 - type: nauc_recall_at_10_max value: 21.634335314260436 - type: nauc_recall_at_10_std value: -5.718314080956008 - type: nauc_recall_at_1_diff1 value: 43.19178740943906 - type: nauc_recall_at_1_max value: 30.877102640311726 - type: nauc_recall_at_1_std value: -14.165080939187726 - type: nauc_recall_at_20_diff1 value: 25.087605827678395 - type: nauc_recall_at_20_max value: 20.130863094684713 - type: nauc_recall_at_20_std value: -5.62005732659447 - type: nauc_recall_at_3_diff1 value: 32.74815068110827 - type: nauc_recall_at_3_max value: 22.403658999564968 - type: nauc_recall_at_3_std value: -8.683387701904735 - type: nauc_recall_at_5_diff1 value: 27.755340185938906 - type: nauc_recall_at_5_max value: 23.586435487805275 - type: nauc_recall_at_5_std value: -5.135301791301631 - type: ndcg_at_1 value: 17.194000000000003 - type: ndcg_at_10 value: 21.488 - type: ndcg_at_100 value: 26.150000000000002 - type: ndcg_at_1000 value: 29.805999999999997 - type: ndcg_at_20 value: 22.718 - type: ndcg_at_3 value: 19.434 - type: ndcg_at_5 value: 20.746000000000002 - type: precision_at_1 value: 17.194000000000003 - type: precision_at_10 value: 4.091 - type: precision_at_100 value: 0.931 - type: precision_at_1000 value: 0.18 - type: precision_at_20 value: 2.54 - type: precision_at_3 value: 9.354 - type: precision_at_5 value: 6.877 - type: recall_at_1 value: 13.569999999999999 - type: recall_at_10 value: 26.634999999999998 - type: recall_at_100 value: 49.457 - type: recall_at_1000 value: 74.978 - type: recall_at_20 value: 31.830000000000002 - type: recall_at_3 value: 20.014000000000003 - type: recall_at_5 value: 23.915 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval (default) type: mteb/cqadupstack-wordpress config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: main_score value: 14.286999999999999 - type: map_at_1 value: 7.8 - type: map_at_10 value: 11.603 - type: map_at_100 value: 12.322 - type: map_at_1000 value: 12.424 - type: map_at_20 value: 11.917 - type: map_at_3 value: 10.241999999999999 - type: map_at_5 value: 10.894 - type: mrr_at_1 value: 8.687615526802219 - type: mrr_at_10 value: 12.827509315494535 - type: mrr_at_100 value: 13.569825117763369 - type: mrr_at_1000 value: 13.664616620933204 - type: mrr_at_20 value: 13.153434876243523 - type: mrr_at_3 value: 11.367837338262479 - type: mrr_at_5 value: 12.060998151571168 - type: nauc_map_at_1000_diff1 value: 21.953862709034876 - type: nauc_map_at_1000_max value: 29.066372403463188 - type: nauc_map_at_1000_std value: -7.250987758385709 - type: nauc_map_at_100_diff1 value: 21.93592696083288 - type: nauc_map_at_100_max value: 29.045471554920262 - type: nauc_map_at_100_std value: -7.347433609703373 - type: nauc_map_at_10_diff1 value: 22.272278874310526 - type: nauc_map_at_10_max value: 29.620096522232625 - type: nauc_map_at_10_std value: -7.56004907693945 - type: nauc_map_at_1_diff1 value: 29.70146011799996 - type: nauc_map_at_1_max value: 33.6582002068041 - type: nauc_map_at_1_std value: -11.43320242844524 - type: nauc_map_at_20_diff1 value: 22.06594846110943 - type: nauc_map_at_20_max value: 29.4352137076757 - type: nauc_map_at_20_std value: -7.640434271085226 - type: nauc_map_at_3_diff1 value: 23.260962069088908 - type: nauc_map_at_3_max value: 29.85851009040783 - type: nauc_map_at_3_std value: -8.493416631968287 - type: nauc_map_at_5_diff1 value: 21.67294210722253 - type: nauc_map_at_5_max value: 30.00826915229784 - type: nauc_map_at_5_std value: -8.443622415442166 - 
type: nauc_mrr_at_1000_diff1 value: 22.104239631860946 - type: nauc_mrr_at_1000_max value: 28.258201262169408 - type: nauc_mrr_at_1000_std value: -6.622347594933508 - type: nauc_mrr_at_100_diff1 value: 22.098536010618822 - type: nauc_mrr_at_100_max value: 28.220245799295107 - type: nauc_mrr_at_100_std value: -6.675059636819916 - type: nauc_mrr_at_10_diff1 value: 22.63401956823091 - type: nauc_mrr_at_10_max value: 28.626927108349953 - type: nauc_mrr_at_10_std value: -6.820539359416205 - type: nauc_mrr_at_1_diff1 value: 30.188275726076373 - type: nauc_mrr_at_1_max value: 32.97489523305523 - type: nauc_mrr_at_1_std value: -10.419791276142904 - type: nauc_mrr_at_20_diff1 value: 22.125155778128224 - type: nauc_mrr_at_20_max value: 28.54628678699734 - type: nauc_mrr_at_20_std value: -6.940802668158878 - type: nauc_mrr_at_3_diff1 value: 23.20363757655989 - type: nauc_mrr_at_3_max value: 28.72037838694496 - type: nauc_mrr_at_3_std value: -7.863052941940037 - type: nauc_mrr_at_5_diff1 value: 21.769709814351764 - type: nauc_mrr_at_5_max value: 29.01182865041742 - type: nauc_mrr_at_5_std value: -7.823698429495608 - type: nauc_ndcg_at_1000_diff1 value: 18.839399965777904 - type: nauc_ndcg_at_1000_max value: 26.409685169340147 - type: nauc_ndcg_at_1000_std value: -2.75323598669575 - type: nauc_ndcg_at_100_diff1 value: 18.980282228228756 - type: nauc_ndcg_at_100_max value: 25.888915953926944 - type: nauc_ndcg_at_100_std value: -4.247963667020685 - type: nauc_ndcg_at_10_diff1 value: 20.268021320985767 - type: nauc_ndcg_at_10_max value: 28.007422388366308 - type: nauc_ndcg_at_10_std value: -6.035880880912193 - type: nauc_ndcg_at_1_diff1 value: 30.188275726076373 - type: nauc_ndcg_at_1_max value: 32.97489523305523 - type: nauc_ndcg_at_1_std value: -10.419791276142904 - type: nauc_ndcg_at_20_diff1 value: 19.475382543592772 - type: nauc_ndcg_at_20_max value: 27.783688816814124 - type: nauc_ndcg_at_20_std value: -6.375668645265656 - type: nauc_ndcg_at_3_diff1 value: 21.17886661176787 - type: nauc_ndcg_at_3_max value: 28.281440509906492 - type: nauc_ndcg_at_3_std value: -7.544056618031584 - type: nauc_ndcg_at_5_diff1 value: 18.58832973791431 - type: nauc_ndcg_at_5_max value: 28.724509771603614 - type: nauc_ndcg_at_5_std value: -7.783318230914177 - type: nauc_precision_at_1000_diff1 value: 7.129904674618118 - type: nauc_precision_at_1000_max value: 7.635578876601942 - type: nauc_precision_at_1000_std value: 9.846306597273538 - type: nauc_precision_at_100_diff1 value: 11.813398381635091 - type: nauc_precision_at_100_max value: 16.32313056743183 - type: nauc_precision_at_100_std value: 4.336689858200671 - type: nauc_precision_at_10_diff1 value: 17.446504784777808 - type: nauc_precision_at_10_max value: 25.408869205476464 - type: nauc_precision_at_10_std value: -1.6572908083948488 - type: nauc_precision_at_1_diff1 value: 30.188275726076373 - type: nauc_precision_at_1_max value: 32.97489523305523 - type: nauc_precision_at_1_std value: -10.419791276142904 - type: nauc_precision_at_20_diff1 value: 14.91677316093746 - type: nauc_precision_at_20_max value: 24.32645869103317 - type: nauc_precision_at_20_std value: -2.9225394914435876 - type: nauc_precision_at_3_diff1 value: 16.841177267297603 - type: nauc_precision_at_3_max value: 24.81824344898353 - type: nauc_precision_at_3_std value: -6.548456214157852 - type: nauc_precision_at_5_diff1 value: 12.601361749535691 - type: nauc_precision_at_5_max value: 25.662845341554753 - type: nauc_precision_at_5_std value: -5.257813050604554 - type: nauc_recall_at_1000_diff1 value: 
9.330142559611428 - type: nauc_recall_at_1000_max value: 19.55092125312593 - type: nauc_recall_at_1000_std value: 12.833888019795856 - type: nauc_recall_at_100_diff1 value: 12.93335051943625 - type: nauc_recall_at_100_max value: 18.554303580780303 - type: nauc_recall_at_100_std value: 2.904381331543482 - type: nauc_recall_at_10_diff1 value: 15.945414878900973 - type: nauc_recall_at_10_max value: 24.45894683906371 - type: nauc_recall_at_10_std value: -3.3285107959242257 - type: nauc_recall_at_1_diff1 value: 29.70146011799996 - type: nauc_recall_at_1_max value: 33.6582002068041 - type: nauc_recall_at_1_std value: -11.43320242844524 - type: nauc_recall_at_20_diff1 value: 14.54592581450925 - type: nauc_recall_at_20_max value: 24.62940289531727 - type: nauc_recall_at_20_std value: -4.525466630360646 - type: nauc_recall_at_3_diff1 value: 15.585536477830441 - type: nauc_recall_at_3_max value: 25.217020737509433 - type: nauc_recall_at_3_std value: -6.386554399226418 - type: nauc_recall_at_5_diff1 value: 11.641604418059668 - type: nauc_recall_at_5_max value: 26.263641139012208 - type: nauc_recall_at_5_std value: -6.77257050164422 - type: ndcg_at_1 value: 8.688 - type: ndcg_at_10 value: 14.286999999999999 - type: ndcg_at_100 value: 18.516 - type: ndcg_at_1000 value: 21.708 - type: ndcg_at_20 value: 15.436 - type: ndcg_at_3 value: 11.376999999999999 - type: ndcg_at_5 value: 12.551000000000002 - type: precision_at_1 value: 8.688 - type: precision_at_10 value: 2.458 - type: precision_at_100 value: 0.505 - type: precision_at_1000 value: 0.084 - type: precision_at_20 value: 1.534 - type: precision_at_3 value: 5.0520000000000005 - type: precision_at_5 value: 3.697 - type: recall_at_1 value: 7.8 - type: recall_at_10 value: 21.59 - type: recall_at_100 value: 42.101 - type: recall_at_1000 value: 67.259 - type: recall_at_20 value: 25.858999999999998 - type: recall_at_3 value: 13.506000000000002 - type: recall_at_5 value: 16.408 - task: type: Retrieval dataset: name: MTEB ClimateFEVER (default) type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: main_score value: 12.91 - type: map_at_1 value: 5.244999999999999 - type: map_at_10 value: 8.533 - type: map_at_100 value: 9.562 - type: map_at_1000 value: 9.701 - type: map_at_20 value: 9.061 - type: map_at_3 value: 7.117 - type: map_at_5 value: 7.747999999999999 - type: mrr_at_1 value: 11.530944625407166 - type: mrr_at_10 value: 17.86644951140064 - type: mrr_at_100 value: 18.874326110051832 - type: mrr_at_1000 value: 18.94561511680038 - type: mrr_at_20 value: 18.47706797129705 - type: mrr_at_3 value: 15.27687296416939 - type: mrr_at_5 value: 16.576547231270354 - type: nauc_map_at_1000_diff1 value: 24.54420825290521 - type: nauc_map_at_1000_max value: 3.897483834465137 - type: nauc_map_at_1000_std value: 19.481805113255135 - type: nauc_map_at_100_diff1 value: 24.55555745351147 - type: nauc_map_at_100_max value: 3.837582687861127 - type: nauc_map_at_100_std value: 19.133723602277477 - type: nauc_map_at_10_diff1 value: 25.265812103632264 - type: nauc_map_at_10_max value: 3.8492593876156564 - type: nauc_map_at_10_std value: 16.506599027024237 - type: nauc_map_at_1_diff1 value: 33.94610398172728 - type: nauc_map_at_1_max value: 1.6496908677205668 - type: nauc_map_at_1_std value: 13.419972442438885 - type: nauc_map_at_20_diff1 value: 24.72824633420426 - type: nauc_map_at_20_max value: 3.783475878999571 - type: nauc_map_at_20_std value: 17.84509170410431 - type: nauc_map_at_3_diff1 value: 
26.956755375738854 - type: nauc_map_at_3_max value: 3.9095753462098775 - type: nauc_map_at_3_std value: 14.346199792189863 - type: nauc_map_at_5_diff1 value: 26.151346472806736 - type: nauc_map_at_5_max value: 3.6340429832669017 - type: nauc_map_at_5_std value: 14.297502705786602 - type: nauc_mrr_at_1000_diff1 value: 23.268773463692998 - type: nauc_mrr_at_1000_max value: 6.109347662338191 - type: nauc_mrr_at_1000_std value: 19.22652674727219 - type: nauc_mrr_at_100_diff1 value: 23.269924125626535 - type: nauc_mrr_at_100_max value: 6.120703236947665 - type: nauc_mrr_at_100_std value: 19.2163581654434 - type: nauc_mrr_at_10_diff1 value: 23.52516707186784 - type: nauc_mrr_at_10_max value: 6.237783397862627 - type: nauc_mrr_at_10_std value: 18.18627288507101 - type: nauc_mrr_at_1_diff1 value: 27.584994677292034 - type: nauc_mrr_at_1_max value: 3.822817171895031 - type: nauc_mrr_at_1_std value: 13.580944806885068 - type: nauc_mrr_at_20_diff1 value: 23.18466877243556 - type: nauc_mrr_at_20_max value: 6.071619184172904 - type: nauc_mrr_at_20_std value: 18.860252064577328 - type: nauc_mrr_at_3_diff1 value: 24.39357898054709 - type: nauc_mrr_at_3_max value: 6.496455479357357 - type: nauc_mrr_at_3_std value: 16.58571208649782 - type: nauc_mrr_at_5_diff1 value: 23.789967014710673 - type: nauc_mrr_at_5_max value: 6.741427679039848 - type: nauc_mrr_at_5_std value: 16.87086607963999 - type: nauc_ndcg_at_1000_diff1 value: 21.749820902072695 - type: nauc_ndcg_at_1000_max value: 4.86812498810872 - type: nauc_ndcg_at_1000_std value: 31.235098248353726 - type: nauc_ndcg_at_100_diff1 value: 21.19681101249399 - type: nauc_ndcg_at_100_max value: 4.6861370875702395 - type: nauc_ndcg_at_100_std value: 27.272107521053297 - type: nauc_ndcg_at_10_diff1 value: 22.773032212350426 - type: nauc_ndcg_at_10_max value: 4.9873425228251955 - type: nauc_ndcg_at_10_std value: 19.5435742476801 - type: nauc_ndcg_at_1_diff1 value: 27.584994677292034 - type: nauc_ndcg_at_1_max value: 3.822817171895031 - type: nauc_ndcg_at_1_std value: 13.580944806885068 - type: nauc_ndcg_at_20_diff1 value: 21.438732145979834 - type: nauc_ndcg_at_20_max value: 4.6005835605739245 - type: nauc_ndcg_at_20_std value: 22.65431596849159 - type: nauc_ndcg_at_3_diff1 value: 24.490757645118904 - type: nauc_ndcg_at_3_max value: 5.962800738138971 - type: nauc_ndcg_at_3_std value: 16.307824488006986 - type: nauc_ndcg_at_5_diff1 value: 23.993915092342622 - type: nauc_ndcg_at_5_max value: 5.236363764316798 - type: nauc_ndcg_at_5_std value: 15.82938355562257 - type: nauc_precision_at_1000_diff1 value: 11.131036670513076 - type: nauc_precision_at_1000_max value: 6.822816660809858 - type: nauc_precision_at_1000_std value: 46.914426444389676 - type: nauc_precision_at_100_diff1 value: 10.955370605222562 - type: nauc_precision_at_100_max value: 7.306594130327962 - type: nauc_precision_at_100_std value: 40.6149528086222 - type: nauc_precision_at_10_diff1 value: 14.798768173392961 - type: nauc_precision_at_10_max value: 8.747564896420851 - type: nauc_precision_at_10_std value: 27.017329972663518 - type: nauc_precision_at_1_diff1 value: 27.584994677292034 - type: nauc_precision_at_1_max value: 3.822817171895031 - type: nauc_precision_at_1_std value: 13.580944806885068 - type: nauc_precision_at_20_diff1 value: 11.832837907912124 - type: nauc_precision_at_20_max value: 7.84405782779581 - type: nauc_precision_at_20_std value: 31.71828414369358 - type: nauc_precision_at_3_diff1 value: 18.994037151223843 - type: nauc_precision_at_3_max value: 9.590257745908866 - type: 
nauc_precision_at_3_std value: 19.0108385933672 - type: nauc_precision_at_5_diff1 value: 16.84707712963686 - type: nauc_precision_at_5_max value: 10.064344353606588 - type: nauc_precision_at_5_std value: 19.57545659630027 - type: nauc_recall_at_1000_diff1 value: 13.874751583251479 - type: nauc_recall_at_1000_max value: 1.530199910786395 - type: nauc_recall_at_1000_std value: 46.27128687120432 - type: nauc_recall_at_100_diff1 value: 13.1528347324774 - type: nauc_recall_at_100_max value: 1.9375434916868963 - type: nauc_recall_at_100_std value: 34.88493356061696 - type: nauc_recall_at_10_diff1 value: 18.04034405954142 - type: nauc_recall_at_10_max value: 3.705815311091777 - type: nauc_recall_at_10_std value: 21.901312599161166 - type: nauc_recall_at_1_diff1 value: 33.94610398172728 - type: nauc_recall_at_1_max value: 1.6496908677205668 - type: nauc_recall_at_1_std value: 13.419972442438885 - type: nauc_recall_at_20_diff1 value: 14.202376007797774 - type: nauc_recall_at_20_max value: 2.2147147149777644 - type: nauc_recall_at_20_std value: 27.12814167677131 - type: nauc_recall_at_3_diff1 value: 22.921929014221593 - type: nauc_recall_at_3_max value: 5.495801553489075 - type: nauc_recall_at_3_std value: 16.34255997562194 - type: nauc_recall_at_5_diff1 value: 20.706978570804146 - type: nauc_recall_at_5_max value: 4.397716927561929 - type: nauc_recall_at_5_std value: 15.316487242353569 - type: ndcg_at_1 value: 11.530999999999999 - type: ndcg_at_10 value: 12.91 - type: ndcg_at_100 value: 17.926000000000002 - type: ndcg_at_1000 value: 21.165 - type: ndcg_at_20 value: 14.793000000000001 - type: ndcg_at_3 value: 9.953 - type: ndcg_at_5 value: 10.847999999999999 - type: precision_at_1 value: 11.530999999999999 - type: precision_at_10 value: 4.247999999999999 - type: precision_at_100 value: 0.943 - type: precision_at_1000 value: 0.154 - type: precision_at_20 value: 2.902 - type: precision_at_3 value: 7.4270000000000005 - type: precision_at_5 value: 5.811 - type: recall_at_1 value: 5.244999999999999 - type: recall_at_10 value: 16.317999999999998 - type: recall_at_100 value: 34.201 - type: recall_at_1000 value: 53.069 - type: recall_at_20 value: 21.808 - type: recall_at_3 value: 9.167 - type: recall_at_5 value: 11.605 - task: type: Retrieval dataset: name: MTEB DBPedia (default) type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: main_score value: 17.809 - type: map_at_1 value: 2.9080000000000004 - type: map_at_10 value: 6.72 - type: map_at_100 value: 9.452 - type: map_at_1000 value: 10.141 - type: map_at_20 value: 7.775 - type: map_at_3 value: 4.838 - type: map_at_5 value: 5.595 - type: mrr_at_1 value: 33.25 - type: mrr_at_10 value: 43.10208333333334 - type: mrr_at_100 value: 43.91155190635367 - type: mrr_at_1000 value: 43.942081922491234 - type: mrr_at_20 value: 43.53115904133708 - type: mrr_at_3 value: 40.37499999999999 - type: mrr_at_5 value: 41.937500000000014 - type: nauc_map_at_1000_diff1 value: 12.464843106371594 - type: nauc_map_at_1000_max value: 20.787030702897695 - type: nauc_map_at_1000_std value: 28.95839241630686 - type: nauc_map_at_100_diff1 value: 12.056329590233632 - type: nauc_map_at_100_max value: 19.582266707899254 - type: nauc_map_at_100_std value: 25.720291368581556 - type: nauc_map_at_10_diff1 value: 11.947408635481318 - type: nauc_map_at_10_max value: 12.217216974254558 - type: nauc_map_at_10_std value: 11.576137158486222 - type: nauc_map_at_1_diff1 value: 21.07052969340483 - type: nauc_map_at_1_max value: 
9.194196653066513 - type: nauc_map_at_1_std value: 10.422057533092019 - type: nauc_map_at_20_diff1 value: 12.996950185313217 - type: nauc_map_at_20_max value: 14.877459115978706 - type: nauc_map_at_20_std value: 16.078479194353804 - type: nauc_map_at_3_diff1 value: 12.713931226731026 - type: nauc_map_at_3_max value: 10.534051914774205 - type: nauc_map_at_3_std value: 6.634455286829892 - type: nauc_map_at_5_diff1 value: 13.49610237252039 - type: nauc_map_at_5_max value: 11.395460371209825 - type: nauc_map_at_5_std value: 8.556070768754035 - type: nauc_mrr_at_1000_diff1 value: 23.440732029069466 - type: nauc_mrr_at_1000_max value: 28.227169599675545 - type: nauc_mrr_at_1000_std value: 24.271326293306412 - type: nauc_mrr_at_100_diff1 value: 23.431318332471474 - type: nauc_mrr_at_100_max value: 28.247320676020777 - type: nauc_mrr_at_100_std value: 24.289544335994325 - type: nauc_mrr_at_10_diff1 value: 23.10244787887524 - type: nauc_mrr_at_10_max value: 28.230713760094805 - type: nauc_mrr_at_10_std value: 23.872224687475942 - type: nauc_mrr_at_1_diff1 value: 27.28025238438753 - type: nauc_mrr_at_1_max value: 29.836674855640243 - type: nauc_mrr_at_1_std value: 25.025348142188943 - type: nauc_mrr_at_20_diff1 value: 23.359567556301606 - type: nauc_mrr_at_20_max value: 28.045194655704407 - type: nauc_mrr_at_20_std value: 24.13890939061388 - type: nauc_mrr_at_3_diff1 value: 23.223682067100583 - type: nauc_mrr_at_3_max value: 26.838082016739516 - type: nauc_mrr_at_3_std value: 22.74149701561025 - type: nauc_mrr_at_5_diff1 value: 23.254953330680365 - type: nauc_mrr_at_5_max value: 27.731371603773923 - type: nauc_mrr_at_5_std value: 23.673666153182165 - type: nauc_ndcg_at_1000_diff1 value: 16.257303689752668 - type: nauc_ndcg_at_1000_max value: 20.372685600143058 - type: nauc_ndcg_at_1000_std value: 43.5647262197375 - type: nauc_ndcg_at_100_diff1 value: 13.712668770381223 - type: nauc_ndcg_at_100_max value: 17.3070502066831 - type: nauc_ndcg_at_100_std value: 34.01332703454124 - type: nauc_ndcg_at_10_diff1 value: 15.272864554548784 - type: nauc_ndcg_at_10_max value: 17.386211785825974 - type: nauc_ndcg_at_10_std value: 25.093090359467173 - type: nauc_ndcg_at_1_diff1 value: 26.811305606655417 - type: nauc_ndcg_at_1_max value: 21.81236974804081 - type: nauc_ndcg_at_1_std value: 21.876218231165208 - type: nauc_ndcg_at_20_diff1 value: 15.570243759415145 - type: nauc_ndcg_at_20_max value: 15.48792448315102 - type: nauc_ndcg_at_20_std value: 24.906899062098667 - type: nauc_ndcg_at_3_diff1 value: 16.562964238706122 - type: nauc_ndcg_at_3_max value: 19.01543958115029 - type: nauc_ndcg_at_3_std value: 22.48353735036461 - type: nauc_ndcg_at_5_diff1 value: 16.232340125010094 - type: nauc_ndcg_at_5_max value: 18.05687758131152 - type: nauc_ndcg_at_5_std value: 22.85229110345859 - type: nauc_precision_at_1000_diff1 value: 11.56385665060498 - type: nauc_precision_at_1000_max value: 20.681035939178482 - type: nauc_precision_at_1000_std value: 36.897327543333354 - type: nauc_precision_at_100_diff1 value: 11.514032623059778 - type: nauc_precision_at_100_max value: 29.047762650445875 - type: nauc_precision_at_100_std value: 47.298484079525174 - type: nauc_precision_at_10_diff1 value: 9.30196384643561 - type: nauc_precision_at_10_max value: 26.02930642801758 - type: nauc_precision_at_10_std value: 33.683648923271505 - type: nauc_precision_at_1_diff1 value: 27.28025238438753 - type: nauc_precision_at_1_max value: 29.836674855640243 - type: nauc_precision_at_1_std value: 25.025348142188943 - type: 
nauc_precision_at_20_diff1 value: 12.53572220614082 - type: nauc_precision_at_20_max value: 27.436119324419035 - type: nauc_precision_at_20_std value: 37.4124720701224 - type: nauc_precision_at_3_diff1 value: 11.473474612430659 - type: nauc_precision_at_3_max value: 25.108171747341117 - type: nauc_precision_at_3_std value: 22.218903585707725 - type: nauc_precision_at_5_diff1 value: 11.651584386463366 - type: nauc_precision_at_5_max value: 26.45472985167932 - type: nauc_precision_at_5_std value: 25.45046633350586 - type: nauc_recall_at_1000_diff1 value: 8.952304094844058 - type: nauc_recall_at_1000_max value: 6.398413185072366 - type: nauc_recall_at_1000_std value: 43.77431410498004 - type: nauc_recall_at_100_diff1 value: 2.4342418404967687 - type: nauc_recall_at_100_max value: 7.263012696368243 - type: nauc_recall_at_100_std value: 29.36126458392181 - type: nauc_recall_at_10_diff1 value: 2.7077112127598997 - type: nauc_recall_at_10_max value: 2.7599172986852833 - type: nauc_recall_at_10_std value: 2.533785276895851 - type: nauc_recall_at_1_diff1 value: 21.07052969340483 - type: nauc_recall_at_1_max value: 9.194196653066513 - type: nauc_recall_at_1_std value: 10.422057533092019 - type: nauc_recall_at_20_diff1 value: 3.6472612051309605 - type: nauc_recall_at_20_max value: 1.8491755772071496 - type: nauc_recall_at_20_std value: 7.2724409200148274 - type: nauc_recall_at_3_diff1 value: 6.007910279710785 - type: nauc_recall_at_3_max value: 4.734271875365448 - type: nauc_recall_at_3_std value: 0.08424826705888651 - type: nauc_recall_at_5_diff1 value: 6.405796890104426 - type: nauc_recall_at_5_max value: 5.069916025405803 - type: nauc_recall_at_5_std value: 0.7763463942604057 - type: ndcg_at_1 value: 22.875 - type: ndcg_at_10 value: 17.809 - type: ndcg_at_100 value: 20.913 - type: ndcg_at_1000 value: 26.843 - type: ndcg_at_20 value: 17.688000000000002 - type: ndcg_at_3 value: 19.901 - type: ndcg_at_5 value: 18.587 - type: precision_at_1 value: 33.25 - type: precision_at_10 value: 16.025 - type: precision_at_100 value: 5.265000000000001 - type: precision_at_1000 value: 1.097 - type: precision_at_20 value: 12.188 - type: precision_at_3 value: 25.0 - type: precision_at_5 value: 20.65 - type: recall_at_1 value: 2.9080000000000004 - type: recall_at_10 value: 11.067 - type: recall_at_100 value: 26.874 - type: recall_at_1000 value: 47.693999999999996 - type: recall_at_20 value: 15.251999999999999 - type: recall_at_3 value: 6.065 - type: recall_at_5 value: 7.84 - task: type: Classification dataset: name: MTEB EmotionClassification (default) type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 36.714999999999996 - type: f1 value: 33.535803051550175 - type: f1_weighted value: 38.73741738231718 - type: main_score value: 36.714999999999996 - task: type: Retrieval dataset: name: MTEB FEVER (default) type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: main_score value: 21.749 - type: map_at_1 value: 11.853 - type: map_at_10 value: 17.788999999999998 - type: map_at_100 value: 18.695 - type: map_at_1000 value: 18.783 - type: map_at_20 value: 18.279999999999998 - type: map_at_3 value: 15.488 - type: map_at_5 value: 16.766000000000002 - type: mrr_at_1 value: 12.57125712571257 - type: mrr_at_10 value: 18.809821458336327 - type: mrr_at_100 value: 19.746247724300634 - type: mrr_at_1000 value: 19.828660283641725 - type: mrr_at_20 value: 19.325603053511834 - type: mrr_at_3 value: 
16.394139413941424 - type: mrr_at_5 value: 17.745774577457816 - type: nauc_map_at_1000_diff1 value: 20.42216628213536 - type: nauc_map_at_1000_max value: 10.981655836421126 - type: nauc_map_at_1000_std value: -11.06254344432782 - type: nauc_map_at_100_diff1 value: 20.430402218559234 - type: nauc_map_at_100_max value: 10.946143961065747 - type: nauc_map_at_100_std value: -11.067509219026796 - type: nauc_map_at_10_diff1 value: 20.633613948259416 - type: nauc_map_at_10_max value: 10.749227715844583 - type: nauc_map_at_10_std value: -11.683369497410549 - type: nauc_map_at_1_diff1 value: 25.93334856369996 - type: nauc_map_at_1_max value: 11.756956805295456 - type: nauc_map_at_1_std value: -15.812253616827613 - type: nauc_map_at_20_diff1 value: 20.53707678990591 - type: nauc_map_at_20_max value: 10.852465838841702 - type: nauc_map_at_20_std value: -11.300317053293336 - type: nauc_map_at_3_diff1 value: 21.197417138364173 - type: nauc_map_at_3_max value: 10.400364426417779 - type: nauc_map_at_3_std value: -13.649848120655465 - type: nauc_map_at_5_diff1 value: 20.84809728992014 - type: nauc_map_at_5_max value: 10.503569044791474 - type: nauc_map_at_5_std value: -12.308858242572567 - type: nauc_mrr_at_1000_diff1 value: 20.256963399952387 - type: nauc_mrr_at_1000_max value: 11.527178442395032 - type: nauc_mrr_at_1000_std value: -11.30536908201306 - type: nauc_mrr_at_100_diff1 value: 20.25440064656351 - type: nauc_mrr_at_100_max value: 11.501764619959824 - type: nauc_mrr_at_100_std value: -11.2998442261201 - type: nauc_mrr_at_10_diff1 value: 20.43696908799925 - type: nauc_mrr_at_10_max value: 11.301632140198784 - type: nauc_mrr_at_10_std value: -11.862198378461013 - type: nauc_mrr_at_1_diff1 value: 25.788068994261927 - type: nauc_mrr_at_1_max value: 12.494106068654443 - type: nauc_mrr_at_1_std value: -16.072022142157422 - type: nauc_mrr_at_20_diff1 value: 20.360762859316843 - type: nauc_mrr_at_20_max value: 11.39368067763063 - type: nauc_mrr_at_20_std value: -11.492483206429506 - type: nauc_mrr_at_3_diff1 value: 21.005337906582582 - type: nauc_mrr_at_3_max value: 11.007636661630489 - type: nauc_mrr_at_3_std value: -13.968861333278157 - type: nauc_mrr_at_5_diff1 value: 20.645981078269408 - type: nauc_mrr_at_5_max value: 11.098139454539123 - type: nauc_mrr_at_5_std value: -12.49821888423247 - type: nauc_ndcg_at_1000_diff1 value: 17.961862840683438 - type: nauc_ndcg_at_1000_max value: 12.633382278961424 - type: nauc_ndcg_at_1000_std value: -6.623628781829191 - type: nauc_ndcg_at_100_diff1 value: 17.947555297079322 - type: nauc_ndcg_at_100_max value: 11.952176273790133 - type: nauc_ndcg_at_100_std value: -6.732908920357083 - type: nauc_ndcg_at_10_diff1 value: 18.88944240845781 - type: nauc_ndcg_at_10_max value: 10.931301252399257 - type: nauc_ndcg_at_10_std value: -9.501435512141649 - type: nauc_ndcg_at_1_diff1 value: 25.788068994261927 - type: nauc_ndcg_at_1_max value: 12.494106068654443 - type: nauc_ndcg_at_1_std value: -16.072022142157422 - type: nauc_ndcg_at_20_diff1 value: 18.596170230193344 - type: nauc_ndcg_at_20_max value: 11.240653699992258 - type: nauc_ndcg_at_20_std value: -8.248089644433646 - type: nauc_ndcg_at_3_diff1 value: 19.899071290487075 - type: nauc_ndcg_at_3_max value: 10.217579017596986 - type: nauc_ndcg_at_3_std value: -13.092631082234583 - type: nauc_ndcg_at_5_diff1 value: 19.36942104398564 - type: nauc_ndcg_at_5_max value: 10.43000193675244 - type: nauc_ndcg_at_5_std value: -10.83023984824733 - type: nauc_precision_at_1000_diff1 value: 3.524222591189092 - type: 
nauc_precision_at_1000_max value: 21.268005942647154 - type: nauc_precision_at_1000_std value: 15.036228494768125 - type: nauc_precision_at_100_diff1 value: 9.81714899740422 - type: nauc_precision_at_100_max value: 16.79030493724481 - type: nauc_precision_at_100_std value: 8.132992070925313 - type: nauc_precision_at_10_diff1 value: 15.127575113065081 - type: nauc_precision_at_10_max value: 11.83424711782065 - type: nauc_precision_at_10_std value: -4.12398539713339 - type: nauc_precision_at_1_diff1 value: 25.788068994261927 - type: nauc_precision_at_1_max value: 12.494106068654443 - type: nauc_precision_at_1_std value: -16.072022142157422 - type: nauc_precision_at_20_diff1 value: 13.988365041285991 - type: nauc_precision_at_20_max value: 12.982343769260144 - type: nauc_precision_at_20_std value: 0.12831196857307875 - type: nauc_precision_at_3_diff1 value: 16.98591248173311 - type: nauc_precision_at_3_max value: 10.076477872033717 - type: nauc_precision_at_3_std value: -11.763027829441572 - type: nauc_precision_at_5_diff1 value: 16.109103361887634 - type: nauc_precision_at_5_max value: 10.743629779747735 - type: nauc_precision_at_5_std value: -7.223871485711275 - type: nauc_recall_at_1000_diff1 value: 7.300447723499678 - type: nauc_recall_at_1000_max value: 21.050009113075134 - type: nauc_recall_at_1000_std value: 14.78834446079826 - type: nauc_recall_at_100_diff1 value: 10.585202094510606 - type: nauc_recall_at_100_max value: 14.3397259367012 - type: nauc_recall_at_100_std value: 6.774673938241939 - type: nauc_recall_at_10_diff1 value: 14.740253776747794 - type: nauc_recall_at_10_max value: 10.775882310785141 - type: nauc_recall_at_10_std value: -4.212933280572477 - type: nauc_recall_at_1_diff1 value: 25.93334856369996 - type: nauc_recall_at_1_max value: 11.756956805295456 - type: nauc_recall_at_1_std value: -15.812253616827613 - type: nauc_recall_at_20_diff1 value: 13.917159385985588 - type: nauc_recall_at_20_max value: 11.562519738362539 - type: nauc_recall_at_20_std value: -0.6257023100650639 - type: nauc_recall_at_3_diff1 value: 16.79817894741575 - type: nauc_recall_at_3_max value: 9.28528047744461 - type: nauc_recall_at_3_std value: -11.417062993569289 - type: nauc_recall_at_5_diff1 value: 15.946724754389002 - type: nauc_recall_at_5_max value: 9.701570943463285 - type: nauc_recall_at_5_std value: -7.10641716237399 - type: ndcg_at_1 value: 12.570999999999998 - type: ndcg_at_10 value: 21.749 - type: ndcg_at_100 value: 26.627000000000002 - type: ndcg_at_1000 value: 29.211 - type: ndcg_at_20 value: 23.546 - type: ndcg_at_3 value: 16.938 - type: ndcg_at_5 value: 19.259 - type: precision_at_1 value: 12.570999999999998 - type: precision_at_10 value: 3.5970000000000004 - type: precision_at_100 value: 0.621 - type: precision_at_1000 value: 0.086 - type: precision_at_20 value: 2.183 - type: precision_at_3 value: 7.2059999999999995 - type: precision_at_5 value: 5.536 - type: recall_at_1 value: 11.853 - type: recall_at_10 value: 33.376 - type: recall_at_100 value: 56.714 - type: recall_at_1000 value: 77.03 - type: recall_at_20 value: 40.327 - type: recall_at_3 value: 20.26 - type: recall_at_5 value: 25.816 - task: type: Retrieval dataset: name: MTEB FiQA2018 (default) type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: main_score value: 9.92 - type: map_at_1 value: 4.127 - type: map_at_10 value: 6.8580000000000005 - type: map_at_100 value: 7.678 - type: map_at_1000 value: 7.8469999999999995 - type: map_at_20 value: 7.2459999999999996 - 
type: map_at_3 value: 5.695 - type: map_at_5 value: 6.321000000000001 - type: mrr_at_1 value: 8.487654320987655 - type: mrr_at_10 value: 13.07460072506369 - type: mrr_at_100 value: 13.994745653960623 - type: mrr_at_1000 value: 14.107792823690083 - type: mrr_at_20 value: 13.534501788196179 - type: mrr_at_3 value: 11.445473251028803 - type: mrr_at_5 value: 12.3559670781893 - type: nauc_map_at_1000_diff1 value: 32.0284999038968 - type: nauc_map_at_1000_max value: 1.5433417591774994 - type: nauc_map_at_1000_std value: -0.7522549236643168 - type: nauc_map_at_100_diff1 value: 32.10293650409455 - type: nauc_map_at_100_max value: 1.331765813078503 - type: nauc_map_at_100_std value: -0.9813834028863421 - type: nauc_map_at_10_diff1 value: 32.996281892439825 - type: nauc_map_at_10_max value: 0.9000809223325343 - type: nauc_map_at_10_std value: -1.1346437895544166 - type: nauc_map_at_1_diff1 value: 40.86108038715362 - type: nauc_map_at_1_max value: 2.3646340186850976 - type: nauc_map_at_1_std value: -1.1273734737305066 - type: nauc_map_at_20_diff1 value: 32.74666906672611 - type: nauc_map_at_20_max value: 1.2905542892955657 - type: nauc_map_at_20_std value: -0.9339025080151999 - type: nauc_map_at_3_diff1 value: 35.22245724674683 - type: nauc_map_at_3_max value: 0.7682718438437706 - type: nauc_map_at_3_std value: 0.12863043400502505 - type: nauc_map_at_5_diff1 value: 33.82974605887253 - type: nauc_map_at_5_max value: 1.9127548750254273 - type: nauc_map_at_5_std value: -1.0892042440032836 - type: nauc_mrr_at_1000_diff1 value: 26.492008408086686 - type: nauc_mrr_at_1000_max value: 5.1988605475320995 - type: nauc_mrr_at_1000_std value: -5.000717562564267 - type: nauc_mrr_at_100_diff1 value: 26.43042358484738 - type: nauc_mrr_at_100_max value: 5.105015607758134 - type: nauc_mrr_at_100_std value: -5.087762897442909 - type: nauc_mrr_at_10_diff1 value: 26.788604447191133 - type: nauc_mrr_at_10_max value: 4.7186331678651845 - type: nauc_mrr_at_10_std value: -5.004992425060064 - type: nauc_mrr_at_1_diff1 value: 32.279840763275516 - type: nauc_mrr_at_1_max value: 2.24128577826757 - type: nauc_mrr_at_1_std value: -7.11209805130024 - type: nauc_mrr_at_20_diff1 value: 26.648740524800157 - type: nauc_mrr_at_20_max value: 5.032938733920583 - type: nauc_mrr_at_20_std value: -4.909302508802945 - type: nauc_mrr_at_3_diff1 value: 29.41800019774434 - type: nauc_mrr_at_3_max value: 4.4590853953847525 - type: nauc_mrr_at_3_std value: -4.3297909365345735 - type: nauc_mrr_at_5_diff1 value: 27.962472533762323 - type: nauc_mrr_at_5_max value: 5.263438068962538 - type: nauc_mrr_at_5_std value: -4.758962874067143 - type: nauc_ndcg_at_1000_diff1 value: 24.911203582060345 - type: nauc_ndcg_at_1000_max value: 4.8332507815090455 - type: nauc_ndcg_at_1000_std value: 1.6141523218130944 - type: nauc_ndcg_at_100_diff1 value: 24.983661152779078 - type: nauc_ndcg_at_100_max value: 2.304457345177104 - type: nauc_ndcg_at_100_std value: -1.5897525359169224 - type: nauc_ndcg_at_10_diff1 value: 28.26656252033789 - type: nauc_ndcg_at_10_max value: 1.7020081362468151 - type: nauc_ndcg_at_10_std value: -1.8666662654279278 - type: nauc_ndcg_at_1_diff1 value: 32.279840763275516 - type: nauc_ndcg_at_1_max value: 2.24128577826757 - type: nauc_ndcg_at_1_std value: -7.11209805130024 - type: nauc_ndcg_at_20_diff1 value: 27.465206920750536 - type: nauc_ndcg_at_20_max value: 2.5953555722799453 - type: nauc_ndcg_at_20_std value: -1.5728415410381176 - type: nauc_ndcg_at_3_diff1 value: 30.920667289434967 - type: nauc_ndcg_at_3_max value: 3.0636991383196537 - 
type: nauc_ndcg_at_3_std value: -1.9109940966007124 - type: nauc_ndcg_at_5_diff1 value: 29.92826036454942 - type: nauc_ndcg_at_5_max value: 4.131081055128095 - type: nauc_ndcg_at_5_std value: -2.3878918992446225 - type: nauc_precision_at_1000_diff1 value: 3.260322987641696 - type: nauc_precision_at_1000_max value: 17.68897292294318 - type: nauc_precision_at_1000_std value: -2.3731970963497435 - type: nauc_precision_at_100_diff1 value: 9.563869576672285 - type: nauc_precision_at_100_max value: 8.334908942965033 - type: nauc_precision_at_100_std value: -5.8502185819543095 - type: nauc_precision_at_10_diff1 value: 19.4489082625378 - type: nauc_precision_at_10_max value: 3.283292230263419 - type: nauc_precision_at_10_std value: -3.474955077429711 - type: nauc_precision_at_1_diff1 value: 32.279840763275516 - type: nauc_precision_at_1_max value: 2.24128577826757 - type: nauc_precision_at_1_std value: -7.11209805130024 - type: nauc_precision_at_20_diff1 value: 16.689938201739743 - type: nauc_precision_at_20_max value: 6.725444203867719 - type: nauc_precision_at_20_std value: -4.064726266450374 - type: nauc_precision_at_3_diff1 value: 25.13225837931828 - type: nauc_precision_at_3_max value: 4.838860499225599 - type: nauc_precision_at_3_std value: -3.958929737721354 - type: nauc_precision_at_5_diff1 value: 24.021979813061318 - type: nauc_precision_at_5_max value: 7.890864147142139 - type: nauc_precision_at_5_std value: -5.108473581125845 - type: nauc_recall_at_1000_diff1 value: 11.754438596675685 - type: nauc_recall_at_1000_max value: 2.6490978066853614 - type: nauc_recall_at_1000_std value: 16.01878535704267 - type: nauc_recall_at_100_diff1 value: 13.38637240649497 - type: nauc_recall_at_100_max value: -1.221302040775315 - type: nauc_recall_at_100_std value: 1.157256497357066 - type: nauc_recall_at_10_diff1 value: 21.794818475196234 - type: nauc_recall_at_10_max value: -0.3633267676365134 - type: nauc_recall_at_10_std value: -0.895901919914364 - type: nauc_recall_at_1_diff1 value: 40.86108038715362 - type: nauc_recall_at_1_max value: 2.3646340186850976 - type: nauc_recall_at_1_std value: -1.1273734737305066 - type: nauc_recall_at_20_diff1 value: 19.87681298491174 - type: nauc_recall_at_20_max value: 1.6730017285596162 - type: nauc_recall_at_20_std value: -0.20426631986163188 - type: nauc_recall_at_3_diff1 value: 30.874288136679436 - type: nauc_recall_at_3_max value: -0.3136634079590933 - type: nauc_recall_at_3_std value: 2.5177179498883 - type: nauc_recall_at_5_diff1 value: 25.256571251371817 - type: nauc_recall_at_5_max value: 3.682723691316816 - type: nauc_recall_at_5_std value: -0.5339704756198042 - type: ndcg_at_1 value: 8.488 - type: ndcg_at_10 value: 9.92 - type: ndcg_at_100 value: 14.548 - type: ndcg_at_1000 value: 18.9 - type: ndcg_at_20 value: 11.359 - type: ndcg_at_3 value: 8.024000000000001 - type: ndcg_at_5 value: 8.792 - type: precision_at_1 value: 8.488 - type: precision_at_10 value: 2.932 - type: precision_at_100 value: 0.748 - type: precision_at_1000 value: 0.148 - type: precision_at_20 value: 2.0140000000000002 - type: precision_at_3 value: 5.556 - type: precision_at_5 value: 4.321 - type: recall_at_1 value: 4.127 - type: recall_at_10 value: 13.094 - type: recall_at_100 value: 31.837 - type: recall_at_1000 value: 59.553 - type: recall_at_20 value: 17.827 - type: recall_at_3 value: 7.384 - type: recall_at_5 value: 9.896 - task: type: Retrieval dataset: name: MTEB HotpotQA (default) type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 
metrics: - type: main_score value: 28.275 - type: map_at_1 value: 16.111 - type: map_at_10 value: 22.017 - type: map_at_100 value: 22.756999999999998 - type: map_at_1000 value: 22.847 - type: map_at_20 value: 22.422 - type: map_at_3 value: 20.358 - type: map_at_5 value: 21.333 - type: mrr_at_1 value: 32.22147197839298 - type: mrr_at_10 value: 38.66461421390523 - type: mrr_at_100 value: 39.322386407471846 - type: mrr_at_1000 value: 39.38317578333015 - type: mrr_at_20 value: 39.03936064723844 - type: mrr_at_3 value: 37.0380373621427 - type: mrr_at_5 value: 37.98739590366868 - type: nauc_map_at_1000_diff1 value: 52.641429993405374 - type: nauc_map_at_1000_max value: 13.846349541182768 - type: nauc_map_at_1000_std value: 21.234286433255207 - type: nauc_map_at_100_diff1 value: 52.657002815638506 - type: nauc_map_at_100_max value: 13.85017253047762 - type: nauc_map_at_100_std value: 21.152031928089446 - type: nauc_map_at_10_diff1 value: 52.99229334884495 - type: nauc_map_at_10_max value: 14.018498788641875 - type: nauc_map_at_10_std value: 20.280967836300796 - type: nauc_map_at_1_diff1 value: 62.48492577674589 - type: nauc_map_at_1_max value: 17.17952567126258 - type: nauc_map_at_1_std value: 14.948761034885164 - type: nauc_map_at_20_diff1 value: 52.79863501218778 - type: nauc_map_at_20_max value: 13.948043219195666 - type: nauc_map_at_20_std value: 20.7595845629364 - type: nauc_map_at_3_diff1 value: 54.858284695883874 - type: nauc_map_at_3_max value: 15.306243909685097 - type: nauc_map_at_3_std value: 18.364146093661798 - type: nauc_map_at_5_diff1 value: 53.64685588504633 - type: nauc_map_at_5_max value: 14.539476850625293 - type: nauc_map_at_5_std value: 19.26181960117483 - type: nauc_mrr_at_1000_diff1 value: 57.57231957804255 - type: nauc_mrr_at_1000_max value: 15.03366896314471 - type: nauc_mrr_at_1000_std value: 18.433684270599176 - type: nauc_mrr_at_100_diff1 value: 57.56183438457194 - type: nauc_mrr_at_100_max value: 15.03096028096824 - type: nauc_mrr_at_100_std value: 18.429416889726777 - type: nauc_mrr_at_10_diff1 value: 57.67734377743546 - type: nauc_mrr_at_10_max value: 15.16017920205799 - type: nauc_mrr_at_10_std value: 18.12061393467236 - type: nauc_mrr_at_1_diff1 value: 62.48492577674589 - type: nauc_mrr_at_1_max value: 17.17952567126258 - type: nauc_mrr_at_1_std value: 14.948761034885164 - type: nauc_mrr_at_20_diff1 value: 57.60348567562974 - type: nauc_mrr_at_20_max value: 15.076107860913815 - type: nauc_mrr_at_20_std value: 18.315578904649655 - type: nauc_mrr_at_3_diff1 value: 58.506133301922404 - type: nauc_mrr_at_3_max value: 15.915584728445186 - type: nauc_mrr_at_3_std value: 17.04808056180522 - type: nauc_mrr_at_5_diff1 value: 57.864519138851 - type: nauc_mrr_at_5_max value: 15.432048897499834 - type: nauc_mrr_at_5_std value: 17.501503102699093 - type: nauc_ndcg_at_1000_diff1 value: 50.81874302391767 - type: nauc_ndcg_at_1000_max value: 12.126965970827337 - type: nauc_ndcg_at_1000_std value: 26.109477652734558 - type: nauc_ndcg_at_100_diff1 value: 50.95009805524029 - type: nauc_ndcg_at_100_max value: 12.295872662993116 - type: nauc_ndcg_at_100_std value: 24.807604340476804 - type: nauc_ndcg_at_10_diff1 value: 52.20877593945092 - type: nauc_ndcg_at_10_max value: 13.097936478311336 - type: nauc_ndcg_at_10_std value: 21.647729284253273 - type: nauc_ndcg_at_1_diff1 value: 62.48492577674589 - type: nauc_ndcg_at_1_max value: 17.17952567126258 - type: nauc_ndcg_at_1_std value: 14.948761034885164 - type: nauc_ndcg_at_20_diff1 value: 51.660197131546795 - type: nauc_ndcg_at_20_max 
value: 12.806424408705414 - type: nauc_ndcg_at_20_std value: 22.845498945756106 - type: nauc_ndcg_at_3_diff1 value: 54.93829038994602 - type: nauc_ndcg_at_3_max value: 15.126023161114087 - type: nauc_ndcg_at_3_std value: 18.550528733148234 - type: nauc_ndcg_at_5_diff1 value: 53.22828386576709 - type: nauc_ndcg_at_5_max value: 14.010347066037058 - type: nauc_ndcg_at_5_std value: 19.741810905430523 - type: nauc_precision_at_1000_diff1 value: 25.685915909789987 - type: nauc_precision_at_1000_max value: 1.8017828825425253 - type: nauc_precision_at_1000_std value: 41.162880151457074 - type: nauc_precision_at_100_diff1 value: 32.092241320736 - type: nauc_precision_at_100_max value: 4.604946834474919 - type: nauc_precision_at_100_std value: 34.4563884520215 - type: nauc_precision_at_10_diff1 value: 41.65435929038311 - type: nauc_precision_at_10_max value: 8.565743855294501 - type: nauc_precision_at_10_std value: 26.21588053936351 - type: nauc_precision_at_1_diff1 value: 62.48492577674589 - type: nauc_precision_at_1_max value: 17.17952567126258 - type: nauc_precision_at_1_std value: 14.948761034885164 - type: nauc_precision_at_20_diff1 value: 38.94463410875179 - type: nauc_precision_at_20_max value: 7.463676781280664 - type: nauc_precision_at_20_std value: 29.137351869373944 - type: nauc_precision_at_3_diff1 value: 50.167835645184425 - type: nauc_precision_at_3_max value: 13.751023116677993 - type: nauc_precision_at_3_std value: 20.36965523817541 - type: nauc_precision_at_5_diff1 value: 45.636896593629885 - type: nauc_precision_at_5_max value: 11.146676622303696 - type: nauc_precision_at_5_std value: 22.338180446057095 - type: nauc_recall_at_1000_diff1 value: 25.68591590979012 - type: nauc_recall_at_1000_max value: 1.801782882542605 - type: nauc_recall_at_1000_std value: 41.162880151457124 - type: nauc_recall_at_100_diff1 value: 32.09224132073595 - type: nauc_recall_at_100_max value: 4.604946834474883 - type: nauc_recall_at_100_std value: 34.45638845202142 - type: nauc_recall_at_10_diff1 value: 41.65435929038313 - type: nauc_recall_at_10_max value: 8.56574385529452 - type: nauc_recall_at_10_std value: 26.215880539363507 - type: nauc_recall_at_1_diff1 value: 62.48492577674589 - type: nauc_recall_at_1_max value: 17.17952567126258 - type: nauc_recall_at_1_std value: 14.948761034885164 - type: nauc_recall_at_20_diff1 value: 38.94463410875175 - type: nauc_recall_at_20_max value: 7.463676781280684 - type: nauc_recall_at_20_std value: 29.13735186937395 - type: nauc_recall_at_3_diff1 value: 50.1678356451844 - type: nauc_recall_at_3_max value: 13.751023116677974 - type: nauc_recall_at_3_std value: 20.369655238175365 - type: nauc_recall_at_5_diff1 value: 45.63689659362988 - type: nauc_recall_at_5_max value: 11.146676622303726 - type: nauc_recall_at_5_std value: 22.33818044605712 - type: ndcg_at_1 value: 32.221 - type: ndcg_at_10 value: 28.275 - type: ndcg_at_100 value: 31.785000000000004 - type: ndcg_at_1000 value: 34.103 - type: ndcg_at_20 value: 29.593000000000004 - type: ndcg_at_3 value: 25.151 - type: ndcg_at_5 value: 26.752 - type: precision_at_1 value: 32.221 - type: precision_at_10 value: 6.1240000000000006 - type: precision_at_100 value: 0.893 - type: precision_at_1000 value: 0.12 - type: precision_at_20 value: 3.486 - type: precision_at_3 value: 15.737000000000002 - type: precision_at_5 value: 10.709 - type: recall_at_1 value: 16.111 - type: recall_at_10 value: 30.621 - type: recall_at_100 value: 44.625 - type: recall_at_1000 value: 60.141999999999996 - type: recall_at_20 value: 34.862 - type: 
recall_at_3 value: 23.605999999999998 - type: recall_at_5 value: 26.772000000000002 - task: type: Classification dataset: name: MTEB ImdbClassification (default) type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 64.7572 - type: ap value: 59.839874895528524 - type: ap_weighted value: 59.839874895528524 - type: f1 value: 64.20337541365726 - type: f1_weighted value: 64.20337541365727 - type: main_score value: 64.7572 - task: type: Retrieval dataset: name: MTEB MSMARCO (default) type: mteb/msmarco config: default split: test revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: main_score value: 24.82 - type: map_at_1 value: 0.735 - type: map_at_10 value: 3.9170000000000003 - type: map_at_100 value: 9.378 - type: map_at_1000 value: 11.623999999999999 - type: map_at_20 value: 5.618 - type: map_at_3 value: 1.7919999999999998 - type: map_at_5 value: 2.336 - type: mrr_at_1 value: 44.18604651162791 - type: mrr_at_10 value: 54.19896640826873 - type: mrr_at_100 value: 55.26573324528463 - type: mrr_at_1000 value: 55.27198566559285 - type: mrr_at_20 value: 54.99928247327269 - type: mrr_at_3 value: 53.100775193798455 - type: mrr_at_5 value: 53.100775193798455 - type: nauc_map_at_1000_diff1 value: 31.628273118548705 - type: nauc_map_at_1000_max value: 50.91371464691997 - type: nauc_map_at_1000_std value: 62.1629306739638 - type: nauc_map_at_100_diff1 value: 34.06331996720226 - type: nauc_map_at_100_max value: 48.04779853755765 - type: nauc_map_at_100_std value: 56.2169602632146 - type: nauc_map_at_10_diff1 value: 24.82479697995596 - type: nauc_map_at_10_max value: 17.260532120473027 - type: nauc_map_at_10_std value: 34.40317364487004 - type: nauc_map_at_1_diff1 value: 18.417203305727828 - type: nauc_map_at_1_max value: 10.348745827965553 - type: nauc_map_at_1_std value: 13.800647316428785 - type: nauc_map_at_20_diff1 value: 28.607666223966184 - type: nauc_map_at_20_max value: 26.857097064842744 - type: nauc_map_at_20_std value: 44.07803604009219 - type: nauc_map_at_3_diff1 value: 14.047344730269346 - type: nauc_map_at_3_max value: 4.963953509469209 - type: nauc_map_at_3_std value: 23.557463504489785 - type: nauc_map_at_5_diff1 value: 21.509241434242192 - type: nauc_map_at_5_max value: 12.46882534029133 - type: nauc_map_at_5_std value: 32.227877810916375 - type: nauc_mrr_at_1000_diff1 value: 31.63313657810774 - type: nauc_mrr_at_1000_max value: 55.49699813296376 - type: nauc_mrr_at_1000_std value: 49.41026226392305 - type: nauc_mrr_at_100_diff1 value: 31.6361977657553 - type: nauc_mrr_at_100_max value: 55.504705533419596 - type: nauc_mrr_at_100_std value: 49.40562252181147 - type: nauc_mrr_at_10_diff1 value: 30.70281063739253 - type: nauc_mrr_at_10_max value: 55.03100675112251 - type: nauc_mrr_at_10_std value: 50.24358852371792 - type: nauc_mrr_at_1_diff1 value: 26.866938946939406 - type: nauc_mrr_at_1_max value: 53.65404374099094 - type: nauc_mrr_at_1_std value: 37.860934759045406 - type: nauc_mrr_at_20_diff1 value: 31.999742146159587 - type: nauc_mrr_at_20_max value: 55.37549959049349 - type: nauc_mrr_at_20_std value: 49.84014367474812 - type: nauc_mrr_at_3_diff1 value: 32.72165006933737 - type: nauc_mrr_at_3_max value: 54.57910637425508 - type: nauc_mrr_at_3_std value: 50.88385330631171 - type: nauc_mrr_at_5_diff1 value: 32.72165006933737 - type: nauc_mrr_at_5_max value: 54.57910637425508 - type: nauc_mrr_at_5_std value: 50.88385330631171 - type: nauc_ndcg_at_1000_diff1 value: 38.246667176580495 - type: 
nauc_ndcg_at_1000_max value: 49.41074648270727 - type: nauc_ndcg_at_1000_std value: 58.77522494287387 - type: nauc_ndcg_at_100_diff1 value: 39.08660687104264 - type: nauc_ndcg_at_100_max value: 51.17365801344417 - type: nauc_ndcg_at_100_std value: 50.96489743248102 - type: nauc_ndcg_at_10_diff1 value: 35.52797859138293 - type: nauc_ndcg_at_10_max value: 47.13047918089127 - type: nauc_ndcg_at_10_std value: 47.525674912522156 - type: nauc_ndcg_at_1_diff1 value: 20.578863285213718 - type: nauc_ndcg_at_1_max value: 33.573506875453205 - type: nauc_ndcg_at_1_std value: 11.414153977938234 - type: nauc_ndcg_at_20_diff1 value: 36.05409218821747 - type: nauc_ndcg_at_20_max value: 51.40798496195552 - type: nauc_ndcg_at_20_std value: 50.81256309557642 - type: nauc_ndcg_at_3_diff1 value: 30.26224700714665 - type: nauc_ndcg_at_3_max value: 38.639459899469855 - type: nauc_ndcg_at_3_std value: 36.35415154738677 - type: nauc_ndcg_at_5_diff1 value: 36.43564587113643 - type: nauc_ndcg_at_5_max value: 46.3557986365278 - type: nauc_ndcg_at_5_std value: 43.88461405861497 - type: nauc_precision_at_1000_diff1 value: 19.248285775071935 - type: nauc_precision_at_1000_max value: 54.75027201666528 - type: nauc_precision_at_1000_std value: 57.85442302597637 - type: nauc_precision_at_100_diff1 value: 29.756268297368276 - type: nauc_precision_at_100_max value: 64.30489557431851 - type: nauc_precision_at_100_std value: 58.606646614493904 - type: nauc_precision_at_10_diff1 value: 34.10288051634421 - type: nauc_precision_at_10_max value: 52.34153820407179 - type: nauc_precision_at_10_std value: 56.999928724425644 - type: nauc_precision_at_1_diff1 value: 26.866938946939406 - type: nauc_precision_at_1_max value: 53.65404374099094 - type: nauc_precision_at_1_std value: 37.860934759045406 - type: nauc_precision_at_20_diff1 value: 33.79921393600524 - type: nauc_precision_at_20_max value: 56.236094445972796 - type: nauc_precision_at_20_std value: 57.15552085215475 - type: nauc_precision_at_3_diff1 value: 26.035425537108857 - type: nauc_precision_at_3_max value: 45.56408327261248 - type: nauc_precision_at_3_std value: 59.56195436648325 - type: nauc_precision_at_5_diff1 value: 34.84378104012192 - type: nauc_precision_at_5_max value: 49.30041620262202 - type: nauc_precision_at_5_std value: 56.6683934979334 - type: nauc_recall_at_1000_diff1 value: 30.51575548576755 - type: nauc_recall_at_1000_max value: 43.64934411599405 - type: nauc_recall_at_1000_std value: 56.84154990793133 - type: nauc_recall_at_100_diff1 value: 39.6998643462103 - type: nauc_recall_at_100_max value: 44.8373934135145 - type: nauc_recall_at_100_std value: 49.873151485862614 - type: nauc_recall_at_10_diff1 value: 24.733893615746922 - type: nauc_recall_at_10_max value: 17.48036291557653 - type: nauc_recall_at_10_std value: 26.533730432814185 - type: nauc_recall_at_1_diff1 value: 18.417203305727828 - type: nauc_recall_at_1_max value: 10.348745827965553 - type: nauc_recall_at_1_std value: 13.800647316428785 - type: nauc_recall_at_20_diff1 value: 30.64841793571244 - type: nauc_recall_at_20_max value: 25.399231149100032 - type: nauc_recall_at_20_std value: 36.03516872677545 - type: nauc_recall_at_3_diff1 value: 14.184010517448723 - type: nauc_recall_at_3_max value: 3.9055370774988845 - type: nauc_recall_at_3_std value: 26.09707135236969 - type: nauc_recall_at_5_diff1 value: 25.775613267290566 - type: nauc_recall_at_5_max value: 13.674868148818057 - type: nauc_recall_at_5_std value: 34.391050366605185 - type: ndcg_at_1 value: 30.232999999999997 - type: ndcg_at_10 value: 
24.82 - type: ndcg_at_100 value: 23.547 - type: ndcg_at_1000 value: 30.558000000000003 - type: ndcg_at_20 value: 24.204 - type: ndcg_at_3 value: 27.322000000000003 - type: ndcg_at_5 value: 25.058000000000003 - type: precision_at_1 value: 44.186 - type: precision_at_10 value: 32.791 - type: precision_at_100 value: 14.860000000000001 - type: precision_at_1000 value: 3.2840000000000003 - type: precision_at_20 value: 28.255999999999997 - type: precision_at_3 value: 40.31 - type: precision_at_5 value: 35.349000000000004 - type: recall_at_1 value: 0.735 - type: recall_at_10 value: 5.367 - type: recall_at_100 value: 19.198999999999998 - type: recall_at_1000 value: 39.997 - type: recall_at_20 value: 8.486 - type: recall_at_3 value: 2.092 - type: recall_at_5 value: 2.758 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 85.46739626082991 - type: f1 value: 84.68203526638132 - type: f1_weighted value: 85.61284249538359 - type: main_score value: 85.46739626082991 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 65.46511627906978 - type: f1 value: 47.640541375476545 - type: f1_weighted value: 69.33504477285032 - type: main_score value: 65.46511627906978 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 62.96570275722932 - type: f1 value: 61.06806674831273 - type: f1_weighted value: 63.23826864499515 - type: main_score value: 62.96570275722932 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 67.0611970410222 - type: f1 value: 65.86938657402365 - type: f1_weighted value: 67.16694460005834 - type: main_score value: 67.0611970410222 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P (default) type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: main_score value: 24.46702077642377 - type: v_measure value: 24.46702077642377 - type: v_measure_std value: 1.4535352745525076 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S (default) type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: main_score value: 19.382712347812014 - type: v_measure value: 19.382712347812014 - type: v_measure_std value: 1.5234944494227807 - task: type: Reranking dataset: name: MTEB MindSmallReranking (default) type: mteb/mind_small config: default split: test revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 metrics: - type: main_score value: 27.029080895625512 - type: map value: 27.029080895625512 - type: mrr value: 27.331766237183647 - type: nAUC_map_diff1 value: 13.215659465363643 - type: nAUC_map_max value: -31.94716011694344 - type: nAUC_map_std value: -19.2078629337707 - type: nAUC_mrr_diff1 value: 12.88388012914082 - type: nAUC_mrr_max value: -25.759798374458892 - type: nAUC_mrr_std value: -15.737741045947908 - task: type: Retrieval dataset: name: MTEB NFCorpus (default) type: mteb/nfcorpus 
config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: main_score value: 20.195 - type: map_at_1 value: 2.945 - type: map_at_10 value: 6.221 - type: map_at_100 value: 7.890999999999999 - type: map_at_1000 value: 8.904 - type: map_at_20 value: 6.827 - type: map_at_3 value: 4.744000000000001 - type: map_at_5 value: 5.469 - type: mrr_at_1 value: 26.93498452012384 - type: mrr_at_10 value: 36.720723377070115 - type: mrr_at_100 value: 37.522499628403175 - type: mrr_at_1000 value: 37.59191125990685 - type: mrr_at_20 value: 37.15237674507148 - type: mrr_at_3 value: 34.52012383900927 - type: mrr_at_5 value: 35.89783281733745 - type: nauc_map_at_1000_diff1 value: 27.79611013865157 - type: nauc_map_at_1000_max value: 25.115487651379688 - type: nauc_map_at_1000_std value: 22.447938705331307 - type: nauc_map_at_100_diff1 value: 29.07484696328452 - type: nauc_map_at_100_max value: 25.550609852486843 - type: nauc_map_at_100_std value: 18.748849434206015 - type: nauc_map_at_10_diff1 value: 32.13313134621841 - type: nauc_map_at_10_max value: 23.42371259213 - type: nauc_map_at_10_std value: 13.132452436446055 - type: nauc_map_at_1_diff1 value: 48.99604418795718 - type: nauc_map_at_1_max value: 20.94493652063089 - type: nauc_map_at_1_std value: 6.27622130781943 - type: nauc_map_at_20_diff1 value: 30.314627824479988 - type: nauc_map_at_20_max value: 24.919313882172776 - type: nauc_map_at_20_std value: 15.47713557515385 - type: nauc_map_at_3_diff1 value: 37.36695377200037 - type: nauc_map_at_3_max value: 20.859250031555383 - type: nauc_map_at_3_std value: 7.487974603438007 - type: nauc_map_at_5_diff1 value: 35.62996317877479 - type: nauc_map_at_5_max value: 22.246030893552174 - type: nauc_map_at_5_std value: 11.234461088832076 - type: nauc_mrr_at_1000_diff1 value: 27.787634466790905 - type: nauc_mrr_at_1000_max value: 26.154081073396874 - type: nauc_mrr_at_1000_std value: 21.49803908031959 - type: nauc_mrr_at_100_diff1 value: 27.775944068106096 - type: nauc_mrr_at_100_max value: 26.18134621303553 - type: nauc_mrr_at_100_std value: 21.49112111683465 - type: nauc_mrr_at_10_diff1 value: 27.66049246199066 - type: nauc_mrr_at_10_max value: 25.953362613367513 - type: nauc_mrr_at_10_std value: 21.9159895988671 - type: nauc_mrr_at_1_diff1 value: 30.047040224446768 - type: nauc_mrr_at_1_max value: 23.814650508147956 - type: nauc_mrr_at_1_std value: 13.975913248930718 - type: nauc_mrr_at_20_diff1 value: 27.738165039507905 - type: nauc_mrr_at_20_max value: 26.175963126916358 - type: nauc_mrr_at_20_std value: 21.6368886583229 - type: nauc_mrr_at_3_diff1 value: 28.920621268944362 - type: nauc_mrr_at_3_max value: 25.14792614833204 - type: nauc_mrr_at_3_std value: 21.63716383788851 - type: nauc_mrr_at_5_diff1 value: 28.29638825898445 - type: nauc_mrr_at_5_max value: 25.207905032193434 - type: nauc_mrr_at_5_std value: 21.620001994525204 - type: nauc_ndcg_at_1000_diff1 value: 23.676678242133264 - type: nauc_ndcg_at_1000_max value: 29.40819281328086 - type: nauc_ndcg_at_1000_std value: 27.48922266163637 - type: nauc_ndcg_at_100_diff1 value: 24.068151236413946 - type: nauc_ndcg_at_100_max value: 26.195824476280627 - type: nauc_ndcg_at_100_std value: 26.21807375892809 - type: nauc_ndcg_at_10_diff1 value: 21.36507362084362 - type: nauc_ndcg_at_10_max value: 21.88154065329857 - type: nauc_ndcg_at_10_std value: 30.590021666432776 - type: nauc_ndcg_at_1_diff1 value: 29.5481445398632 - type: nauc_ndcg_at_1_max value: 21.28363101652307 - type: nauc_ndcg_at_1_std value: 16.267359871177767 - type: 
nauc_ndcg_at_20_diff1 value: 22.786374147311257 - type: nauc_ndcg_at_20_max value: 23.71430035323994 - type: nauc_ndcg_at_20_std value: 30.948437670908152 - type: nauc_ndcg_at_3_diff1 value: 22.73384684789295 - type: nauc_ndcg_at_3_max value: 23.884749210882312 - type: nauc_ndcg_at_3_std value: 27.914342072137188 - type: nauc_ndcg_at_5_diff1 value: 20.747332983786713 - type: nauc_ndcg_at_5_max value: 21.92441825265579 - type: nauc_ndcg_at_5_std value: 29.75514293433641 - type: nauc_precision_at_1000_diff1 value: -1.65536586785613 - type: nauc_precision_at_1000_max value: -1.3001979301423146 - type: nauc_precision_at_1000_std value: 43.228651563159566 - type: nauc_precision_at_100_diff1 value: 3.1345908963206797 - type: nauc_precision_at_100_max value: 7.571791761705496 - type: nauc_precision_at_100_std value: 44.15229657763602 - type: nauc_precision_at_10_diff1 value: 7.683752240473546 - type: nauc_precision_at_10_max value: 19.5029803917141 - type: nauc_precision_at_10_std value: 38.62282334783876 - type: nauc_precision_at_1_diff1 value: 30.047040224446768 - type: nauc_precision_at_1_max value: 23.814650508147956 - type: nauc_precision_at_1_std value: 13.975913248930718 - type: nauc_precision_at_20_diff1 value: 6.930295774089716 - type: nauc_precision_at_20_max value: 18.751959496812546 - type: nauc_precision_at_20_std value: 43.43876310805847 - type: nauc_precision_at_3_diff1 value: 15.645692055073493 - type: nauc_precision_at_3_max value: 27.07516284809194 - type: nauc_precision_at_3_std value: 32.791468901313635 - type: nauc_precision_at_5_diff1 value: 9.551833738631395 - type: nauc_precision_at_5_max value: 22.195462858158265 - type: nauc_precision_at_5_std value: 35.86235073052298 - type: nauc_recall_at_1000_diff1 value: 17.10758113070841 - type: nauc_recall_at_1000_max value: 14.409721865645015 - type: nauc_recall_at_1000_std value: 7.175910246747222 - type: nauc_recall_at_100_diff1 value: 16.98022992881172 - type: nauc_recall_at_100_max value: 15.144596632597517 - type: nauc_recall_at_100_std value: 5.807717340611582 - type: nauc_recall_at_10_diff1 value: 23.064192726886542 - type: nauc_recall_at_10_max value: 16.546409463109317 - type: nauc_recall_at_10_std value: 10.660303125291867 - type: nauc_recall_at_1_diff1 value: 48.99604418795718 - type: nauc_recall_at_1_max value: 20.94493652063089 - type: nauc_recall_at_1_std value: 6.27622130781943 - type: nauc_recall_at_20_diff1 value: 19.152047964875734 - type: nauc_recall_at_20_max value: 20.717660137504122 - type: nauc_recall_at_20_std value: 10.52999190542913 - type: nauc_recall_at_3_diff1 value: 28.737014872630745 - type: nauc_recall_at_3_max value: 13.65850690480607 - type: nauc_recall_at_3_std value: 5.427736667755079 - type: nauc_recall_at_5_diff1 value: 26.4019394342239 - type: nauc_recall_at_5_max value: 15.134299251730008 - type: nauc_recall_at_5_std value: 9.576300445195523 - type: ndcg_at_1 value: 25.386999999999997 - type: ndcg_at_10 value: 20.195 - type: ndcg_at_100 value: 19.337 - type: ndcg_at_1000 value: 28.089 - type: ndcg_at_20 value: 18.741 - type: ndcg_at_3 value: 23.221 - type: ndcg_at_5 value: 22.076 - type: precision_at_1 value: 26.935 - type: precision_at_10 value: 15.076999999999998 - type: precision_at_100 value: 5.492 - type: precision_at_1000 value: 1.779 - type: precision_at_20 value: 11.315999999999999 - type: precision_at_3 value: 22.291 - type: precision_at_5 value: 19.442999999999998 - type: recall_at_1 value: 2.945 - type: recall_at_10 value: 9.578000000000001 - type: recall_at_100 value: 21.876 - 
type: recall_at_1000 value: 52.305 - type: recall_at_20 value: 12.041 - type: recall_at_3 value: 5.892 - type: recall_at_5 value: 7.553 - task: type: Retrieval dataset: name: MTEB NQ (default) type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: main_score value: 16.625999999999998 - type: map_at_1 value: 7.602 - type: map_at_10 value: 13.062999999999999 - type: map_at_100 value: 13.987 - type: map_at_1000 value: 14.079 - type: map_at_20 value: 13.571 - type: map_at_3 value: 11.176 - type: map_at_5 value: 12.106 - type: mrr_at_1 value: 8.603707995365006 - type: mrr_at_10 value: 14.42836221008294 - type: mrr_at_100 value: 15.327619422421954 - type: mrr_at_1000 value: 15.409036789424253 - type: mrr_at_20 value: 14.93547472380801 - type: mrr_at_3 value: 12.47103128621087 - type: mrr_at_5 value: 13.469003476245614 - type: nauc_map_at_1000_diff1 value: 22.53140441222631 - type: nauc_map_at_1000_max value: 9.698467434337164 - type: nauc_map_at_1000_std value: 11.345286586567711 - type: nauc_map_at_100_diff1 value: 22.522109671269725 - type: nauc_map_at_100_max value: 9.676164796732111 - type: nauc_map_at_100_std value: 11.248118748580213 - type: nauc_map_at_10_diff1 value: 22.491463480004217 - type: nauc_map_at_10_max value: 9.573423511496964 - type: nauc_map_at_10_std value: 9.979232730710939 - type: nauc_map_at_1_diff1 value: 25.732381756890373 - type: nauc_map_at_1_max value: 10.84116623858562 - type: nauc_map_at_1_std value: 7.338100936490713 - type: nauc_map_at_20_diff1 value: 22.53923355716687 - type: nauc_map_at_20_max value: 9.652116335078432 - type: nauc_map_at_20_std value: 10.670127330165213 - type: nauc_map_at_3_diff1 value: 22.618120346879962 - type: nauc_map_at_3_max value: 9.086786085358039 - type: nauc_map_at_3_std value: 8.20995647201015 - type: nauc_map_at_5_diff1 value: 22.64115159473954 - type: nauc_map_at_5_max value: 9.194704682395841 - type: nauc_map_at_5_std value: 8.810417562333175 - type: nauc_mrr_at_1000_diff1 value: 22.344805318852927 - type: nauc_mrr_at_1000_max value: 9.163892702470772 - type: nauc_mrr_at_1000_std value: 11.351012340897705 - type: nauc_mrr_at_100_diff1 value: 22.336047255436036 - type: nauc_mrr_at_100_max value: 9.145224907428604 - type: nauc_mrr_at_100_std value: 11.28358245102265 - type: nauc_mrr_at_10_diff1 value: 22.25037742257287 - type: nauc_mrr_at_10_max value: 8.95546839387158 - type: nauc_mrr_at_10_std value: 10.271673610986973 - type: nauc_mrr_at_1_diff1 value: 25.910389767357838 - type: nauc_mrr_at_1_max value: 10.043907328097326 - type: nauc_mrr_at_1_std value: 7.5411580653545665 - type: nauc_mrr_at_20_diff1 value: 22.330127161074522 - type: nauc_mrr_at_20_max value: 9.103315674717512 - type: nauc_mrr_at_20_std value: 10.85259680963488 - type: nauc_mrr_at_3_diff1 value: 22.85678641354908 - type: nauc_mrr_at_3_max value: 8.525432350871027 - type: nauc_mrr_at_3_std value: 8.877916382224424 - type: nauc_mrr_at_5_diff1 value: 22.5016422227308 - type: nauc_mrr_at_5_max value: 8.71305219879408 - type: nauc_mrr_at_5_std value: 9.480157566645657 - type: nauc_ndcg_at_1000_diff1 value: 21.615613945394724 - type: nauc_ndcg_at_1000_max value: 10.140787968124906 - type: nauc_ndcg_at_1000_std value: 18.91156900295804 - type: nauc_ndcg_at_100_diff1 value: 21.449543597580128 - type: nauc_ndcg_at_100_max value: 9.764472700567374 - type: nauc_ndcg_at_100_std value: 17.00068045706022 - type: nauc_ndcg_at_10_diff1 value: 21.467825825652994 - type: nauc_ndcg_at_10_max value: 9.433691829219262 - type: 
nauc_ndcg_at_10_std value: 11.645671336911704 - type: nauc_ndcg_at_1_diff1 value: 26.192032369383917 - type: nauc_ndcg_at_1_max value: 9.968495759668212 - type: nauc_ndcg_at_1_std value: 7.7353705558822625 - type: nauc_ndcg_at_20_diff1 value: 21.559602636114114 - type: nauc_ndcg_at_20_max value: 9.611322723657722 - type: nauc_ndcg_at_20_std value: 13.45124376661578 - type: nauc_ndcg_at_3_diff1 value: 21.942611052570136 - type: nauc_ndcg_at_3_max value: 8.546943480026158 - type: nauc_ndcg_at_3_std value: 8.558826963066005 - type: nauc_ndcg_at_5_diff1 value: 21.81661495292013 - type: nauc_ndcg_at_5_max value: 8.814628270505972 - type: nauc_ndcg_at_5_std value: 9.553325054391859 - type: nauc_precision_at_1000_diff1 value: 12.97521395897297 - type: nauc_precision_at_1000_max value: 8.478693692040677 - type: nauc_precision_at_1000_std value: 36.05577365548163 - type: nauc_precision_at_100_diff1 value: 16.657230982970816 - type: nauc_precision_at_100_max value: 8.209079564335859 - type: nauc_precision_at_100_std value: 30.27783657644826 - type: nauc_precision_at_10_diff1 value: 19.33963457061645 - type: nauc_precision_at_10_max value: 8.214228079850216 - type: nauc_precision_at_10_std value: 15.301384981024956 - type: nauc_precision_at_1_diff1 value: 26.192032369383917 - type: nauc_precision_at_1_max value: 9.968495759668212 - type: nauc_precision_at_1_std value: 7.7353705558822625 - type: nauc_precision_at_20_diff1 value: 19.43023023951576 - type: nauc_precision_at_20_max value: 8.721068295460837 - type: nauc_precision_at_20_std value: 19.87359595692818 - type: nauc_precision_at_3_diff1 value: 20.98520268342122 - type: nauc_precision_at_3_max value: 6.997024310982154 - type: nauc_precision_at_3_std value: 9.277070159437823 - type: nauc_precision_at_5_diff1 value: 20.628506750684906 - type: nauc_precision_at_5_max value: 7.3222879491405966 - type: nauc_precision_at_5_std value: 11.105451869396907 - type: nauc_recall_at_1000_diff1 value: 18.99843982673276 - type: nauc_recall_at_1000_max value: 12.09746379039881 - type: nauc_recall_at_1000_std value: 47.137305858569114 - type: nauc_recall_at_100_diff1 value: 18.68777092563649 - type: nauc_recall_at_100_max value: 10.03569790422345 - type: nauc_recall_at_100_std value: 30.950722423284187 - type: nauc_recall_at_10_diff1 value: 19.128896089153272 - type: nauc_recall_at_10_max value: 9.482016845402566 - type: nauc_recall_at_10_std value: 14.412901077358026 - type: nauc_recall_at_1_diff1 value: 25.732381756890373 - type: nauc_recall_at_1_max value: 10.84116623858562 - type: nauc_recall_at_1_std value: 7.338100936490713 - type: nauc_recall_at_20_diff1 value: 19.030846984141448 - type: nauc_recall_at_20_max value: 9.542717362815113 - type: nauc_recall_at_20_std value: 18.266090149714877 - type: nauc_recall_at_3_diff1 value: 19.7871264081636 - type: nauc_recall_at_3_max value: 7.737796420966328 - type: nauc_recall_at_3_std value: 8.666865785409758 - type: nauc_recall_at_5_diff1 value: 19.892123057309906 - type: nauc_recall_at_5_max value: 8.298969098777071 - type: nauc_recall_at_5_std value: 10.420267011108276 - type: ndcg_at_1 value: 8.575000000000001 - type: ndcg_at_10 value: 16.625999999999998 - type: ndcg_at_100 value: 21.397 - type: ndcg_at_1000 value: 24.018 - type: ndcg_at_20 value: 18.421000000000003 - type: ndcg_at_3 value: 12.658 - type: ndcg_at_5 value: 14.338999999999999 - type: precision_at_1 value: 8.575000000000001 - type: precision_at_10 value: 3.1 - type: precision_at_100 value: 0.581 - type: precision_at_1000 value: 0.083 - type: 
precision_at_20 value: 1.957 - type: precision_at_3 value: 6.064 - type: precision_at_5 value: 4.577 - type: recall_at_1 value: 7.602 - type: recall_at_10 value: 26.400000000000002 - type: recall_at_100 value: 48.634 - type: recall_at_1000 value: 68.90899999999999 - type: recall_at_20 value: 33.164 - type: recall_at_3 value: 15.679000000000002 - type: recall_at_5 value: 19.602 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval (default) type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: main_score value: 72.21600000000001 - type: map_at_1 value: 55.492 - type: map_at_10 value: 67.14 - type: map_at_100 value: 67.946 - type: map_at_1000 value: 67.989 - type: map_at_20 value: 67.62899999999999 - type: map_at_3 value: 64.279 - type: map_at_5 value: 65.967 - type: mrr_at_1 value: 63.88 - type: mrr_at_10 value: 71.93389682539649 - type: mrr_at_100 value: 72.28910372486355 - type: mrr_at_1000 value: 72.3024887760509 - type: mrr_at_20 value: 72.16798297677506 - type: mrr_at_3 value: 70.188333333333 - type: mrr_at_5 value: 71.26733333333279 - type: nauc_map_at_1000_diff1 value: 68.1515295383326 - type: nauc_map_at_1000_max value: 37.54695769101746 - type: nauc_map_at_1000_std value: -12.805904377344 - type: nauc_map_at_100_diff1 value: 68.14399424614948 - type: nauc_map_at_100_max value: 37.54328801779939 - type: nauc_map_at_100_std value: -12.83845975657647 - type: nauc_map_at_10_diff1 value: 68.04097237081037 - type: nauc_map_at_10_max value: 37.19790649304174 - type: nauc_map_at_10_std value: -13.574656560807451 - type: nauc_map_at_1_diff1 value: 70.06050188284856 - type: nauc_map_at_1_max value: 32.7950423160114 - type: nauc_map_at_1_std value: -15.96831844096167 - type: nauc_map_at_20_diff1 value: 68.09197231492732 - type: nauc_map_at_20_max value: 37.385624168302684 - type: nauc_map_at_20_std value: -13.155476799236565 - type: nauc_map_at_3_diff1 value: 68.23134276838651 - type: nauc_map_at_3_max value: 36.23832837393925 - type: nauc_map_at_3_std value: -15.423833858804532 - type: nauc_map_at_5_diff1 value: 67.95900982506224 - type: nauc_map_at_5_max value: 36.53132827026241 - type: nauc_map_at_5_std value: -14.482907430203696 - type: nauc_mrr_at_1000_diff1 value: 69.62457918048828 - type: nauc_mrr_at_1000_max value: 40.07844145179273 - type: nauc_mrr_at_1000_std value: -10.644864923349227 - type: nauc_mrr_at_100_diff1 value: 69.62059876593055 - type: nauc_mrr_at_100_max value: 40.07904892244788 - type: nauc_mrr_at_100_std value: -10.637692251883314 - type: nauc_mrr_at_10_diff1 value: 69.52502303386919 - type: nauc_mrr_at_10_max value: 40.10809003322649 - type: nauc_mrr_at_10_std value: -10.684922661530145 - type: nauc_mrr_at_1_diff1 value: 72.0826342696167 - type: nauc_mrr_at_1_max value: 39.8840674644011 - type: nauc_mrr_at_1_std value: -12.897908766689145 - type: nauc_mrr_at_20_diff1 value: 69.58190352660375 - type: nauc_mrr_at_20_max value: 40.0783519699091 - type: nauc_mrr_at_20_std value: -10.629858366175634 - type: nauc_mrr_at_3_diff1 value: 69.46685839511639 - type: nauc_mrr_at_3_max value: 39.98286553507212 - type: nauc_mrr_at_3_std value: -11.679166167876408 - type: nauc_mrr_at_5_diff1 value: 69.44350507999052 - type: nauc_mrr_at_5_max value: 39.91668797347604 - type: nauc_mrr_at_5_std value: -11.060504498483011 - type: nauc_ndcg_at_1000_diff1 value: 68.08522323983172 - type: nauc_ndcg_at_1000_max value: 38.78930800558068 - type: nauc_ndcg_at_1000_std value: -9.380187466388266 - type: nauc_ndcg_at_100_diff1 value: 
67.89445682736151 - type: nauc_ndcg_at_100_max value: 38.76088209944818 - type: nauc_ndcg_at_100_std value: -9.332407536563391 - type: nauc_ndcg_at_10_diff1 value: 67.32980612110863 - type: nauc_ndcg_at_10_max value: 38.20460047799145 - type: nauc_ndcg_at_10_std value: -11.08956339625659 - type: nauc_ndcg_at_1_diff1 value: 72.02112312263394 - type: nauc_ndcg_at_1_max value: 39.88906073001357 - type: nauc_ndcg_at_1_std value: -12.890119245130952 - type: nauc_ndcg_at_20_diff1 value: 67.57306809180233 - type: nauc_ndcg_at_20_max value: 38.344690097960274 - type: nauc_ndcg_at_20_std value: -10.361436571647312 - type: nauc_ndcg_at_3_diff1 value: 67.3468184467274 - type: nauc_ndcg_at_3_max value: 37.71021875036499 - type: nauc_ndcg_at_3_std value: -13.237678612410885 - type: nauc_ndcg_at_5_diff1 value: 67.09372417578471 - type: nauc_ndcg_at_5_max value: 37.36947760591302 - type: nauc_ndcg_at_5_std value: -12.359281253686154 - type: nauc_precision_at_1000_diff1 value: -25.80880508117984 - type: nauc_precision_at_1000_max value: -1.6580485832415812 - type: nauc_precision_at_1000_std value: 25.560507838753338 - type: nauc_precision_at_100_diff1 value: -19.632955996672276 - type: nauc_precision_at_100_max value: 4.1425708066313245 - type: nauc_precision_at_100_std value: 24.887433001522812 - type: nauc_precision_at_10_diff1 value: 0.031433531700304586 - type: nauc_precision_at_10_max value: 16.22983398823126 - type: nauc_precision_at_10_std value: 14.947957320162104 - type: nauc_precision_at_1_diff1 value: 72.02112312263394 - type: nauc_precision_at_1_max value: 39.88906073001357 - type: nauc_precision_at_1_std value: -12.890119245130952 - type: nauc_precision_at_20_diff1 value: -8.42349247517796 - type: nauc_precision_at_20_max value: 11.629391098316281 - type: nauc_precision_at_20_std value: 19.70570016448179 - type: nauc_precision_at_3_diff1 value: 26.589087035439178 - type: nauc_precision_at_3_max value: 26.505762530737552 - type: nauc_precision_at_3_std value: 0.5240934369043053 - type: nauc_precision_at_5_diff1 value: 13.15610105266614 - type: nauc_precision_at_5_max value: 20.826382527147143 - type: nauc_precision_at_5_std value: 7.122292405395125 - type: nauc_recall_at_1000_diff1 value: 60.43306391136544 - type: nauc_recall_at_1000_max value: 39.04457194993258 - type: nauc_recall_at_1000_std value: 48.10736929427494 - type: nauc_recall_at_100_diff1 value: 57.50786726720306 - type: nauc_recall_at_100_max value: 36.96036359746097 - type: nauc_recall_at_100_std value: 17.43991349542963 - type: nauc_recall_at_10_diff1 value: 59.55969023354459 - type: nauc_recall_at_10_max value: 35.77811946712584 - type: nauc_recall_at_10_std value: -5.308872679370513 - type: nauc_recall_at_1_diff1 value: 70.06050188284856 - type: nauc_recall_at_1_max value: 32.7950423160114 - type: nauc_recall_at_1_std value: -15.96831844096167 - type: nauc_recall_at_20_diff1 value: 59.36231963477334 - type: nauc_recall_at_20_max value: 35.74083065802152 - type: nauc_recall_at_20_std value: 0.10479581364017149 - type: nauc_recall_at_3_diff1 value: 62.98815867302664 - type: nauc_recall_at_3_max value: 34.35053076037745 - type: nauc_recall_at_3_std value: -14.653656328573552 - type: nauc_recall_at_5_diff1 value: 60.83589763557177 - type: nauc_recall_at_5_max value: 33.67596252960681 - type: nauc_recall_at_5_std value: -10.962319835832373 - type: ndcg_at_1 value: 63.91 - type: ndcg_at_10 value: 72.21600000000001 - type: ndcg_at_100 value: 74.895 - type: ndcg_at_1000 value: 75.606 - type: ndcg_at_20 value: 73.41499999999999 - type: 
ndcg_at_3 value: 68.27600000000001 - type: ndcg_at_5 value: 70.243 - type: precision_at_1 value: 63.91 - type: precision_at_10 value: 10.947999999999999 - type: precision_at_100 value: 1.351 - type: precision_at_1000 value: 0.148 - type: precision_at_20 value: 5.94 - type: precision_at_3 value: 29.473 - type: precision_at_5 value: 19.6 - type: recall_at_1 value: 55.492 - type: recall_at_10 value: 82.191 - type: recall_at_100 value: 93.01700000000001 - type: recall_at_1000 value: 97.548 - type: recall_at_20 value: 86.37899999999999 - type: recall_at_3 value: 70.977 - type: recall_at_5 value: 76.347 - task: type: Clustering dataset: name: MTEB RedditClustering (default) type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: main_score value: 20.032944959465006 - type: v_measure value: 20.032944959465006 - type: v_measure_std value: 3.977494953209651 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P (default) type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: main_score value: 35.59444866525171 - type: v_measure value: 35.59444866525171 - type: v_measure_std value: 9.825394674707393 - task: type: Retrieval dataset: name: MTEB SCIDOCS (default) type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: main_score value: 9.735000000000001 - type: map_at_1 value: 2.29 - type: map_at_10 value: 5.306 - type: map_at_100 value: 6.419 - type: map_at_1000 value: 6.643000000000001 - type: map_at_20 value: 5.821 - type: map_at_3 value: 3.9280000000000004 - type: map_at_5 value: 4.569 - type: mrr_at_1 value: 11.3 - type: mrr_at_10 value: 17.82924603174603 - type: mrr_at_100 value: 19.05406312305304 - type: mrr_at_1000 value: 19.166442675275487 - type: mrr_at_20 value: 18.541102740139127 - type: mrr_at_3 value: 15.5 - type: mrr_at_5 value: 16.770000000000003 - type: nauc_map_at_1000_diff1 value: 16.74948758074446 - type: nauc_map_at_1000_max value: 16.272314062078568 - type: nauc_map_at_1000_std value: 9.802528912458712 - type: nauc_map_at_100_diff1 value: 16.799538408566203 - type: nauc_map_at_100_max value: 16.161996440884547 - type: nauc_map_at_100_std value: 9.313374283200838 - type: nauc_map_at_10_diff1 value: 17.678328509818915 - type: nauc_map_at_10_max value: 16.15778661911792 - type: nauc_map_at_10_std value: 6.28185653746446 - type: nauc_map_at_1_diff1 value: 23.710070267313093 - type: nauc_map_at_1_max value: 11.885861612261381 - type: nauc_map_at_1_std value: 2.3279474317156885 - type: nauc_map_at_20_diff1 value: 17.40871167385732 - type: nauc_map_at_20_max value: 16.071559259108398 - type: nauc_map_at_20_std value: 7.768356041713391 - type: nauc_map_at_3_diff1 value: 19.657615469261334 - type: nauc_map_at_3_max value: 13.47369716035662 - type: nauc_map_at_3_std value: 4.1278803917212645 - type: nauc_map_at_5_diff1 value: 18.1480875610142 - type: nauc_map_at_5_max value: 14.230776814076188 - type: nauc_map_at_5_std value: 3.622953870263071 - type: nauc_mrr_at_1000_diff1 value: 15.571325709422576 - type: nauc_mrr_at_1000_max value: 11.961320489344015 - type: nauc_mrr_at_1000_std value: 4.450063785196112 - type: nauc_mrr_at_100_diff1 value: 15.536886637669223 - type: nauc_mrr_at_100_max value: 11.935672425486493 - type: nauc_mrr_at_100_std value: 4.447377015945805 - type: nauc_mrr_at_10_diff1 value: 15.53551580233096 - type: nauc_mrr_at_10_max value: 12.042070930511581 - type: 
nauc_mrr_at_10_std value: 3.93193017344515 - type: nauc_mrr_at_1_diff1 value: 23.57876428894364 - type: nauc_mrr_at_1_max value: 12.12043352908189 - type: nauc_mrr_at_1_std value: 2.7432795333657802 - type: nauc_mrr_at_20_diff1 value: 15.495682418058236 - type: nauc_mrr_at_20_max value: 11.827749923147806 - type: nauc_mrr_at_20_std value: 4.337028857179092 - type: nauc_mrr_at_3_diff1 value: 17.374795948438287 - type: nauc_mrr_at_3_max value: 12.27547705847877 - type: nauc_mrr_at_3_std value: 3.4490922395838437 - type: nauc_mrr_at_5_diff1 value: 15.960359036123982 - type: nauc_mrr_at_5_max value: 11.88067300089711 - type: nauc_mrr_at_5_std value: 3.392937900544711 - type: nauc_ndcg_at_1000_diff1 value: 12.368330070387364 - type: nauc_ndcg_at_1000_max value: 15.835670856182855 - type: nauc_ndcg_at_1000_std value: 15.767747221405982 - type: nauc_ndcg_at_100_diff1 value: 12.811381379776785 - type: nauc_ndcg_at_100_max value: 14.882621585275865 - type: nauc_ndcg_at_100_std value: 12.918687954717855 - type: nauc_ndcg_at_10_diff1 value: 14.52700036040429 - type: nauc_ndcg_at_10_max value: 14.99153202568684 - type: nauc_ndcg_at_10_std value: 7.059926007520434 - type: nauc_ndcg_at_1_diff1 value: 23.57876428894364 - type: nauc_ndcg_at_1_max value: 12.12043352908189 - type: nauc_ndcg_at_1_std value: 2.7432795333657802 - type: nauc_ndcg_at_20_diff1 value: 14.319122424506627 - type: nauc_ndcg_at_20_max value: 14.419552299612848 - type: nauc_ndcg_at_20_std value: 9.576470751691424 - type: nauc_ndcg_at_3_diff1 value: 17.65692103341824 - type: nauc_ndcg_at_3_max value: 13.00851027147015 - type: nauc_ndcg_at_3_std value: 4.543310351593028 - type: nauc_ndcg_at_5_diff1 value: 15.317354497478568 - type: nauc_ndcg_at_5_max value: 13.129615291647509 - type: nauc_ndcg_at_5_std value: 3.616970208712892 - type: nauc_precision_at_1000_diff1 value: 4.085961395572508 - type: nauc_precision_at_1000_max value: 14.479584032109782 - type: nauc_precision_at_1000_std value: 24.822216452687034 - type: nauc_precision_at_100_diff1 value: 6.64282379200282 - type: nauc_precision_at_100_max value: 13.909179682772733 - type: nauc_precision_at_100_std value: 19.78443717745781 - type: nauc_precision_at_10_diff1 value: 10.32762182507006 - type: nauc_precision_at_10_max value: 15.932552899576436 - type: nauc_precision_at_10_std value: 9.985036232997176 - type: nauc_precision_at_1_diff1 value: 23.57876428894364 - type: nauc_precision_at_1_max value: 12.12043352908189 - type: nauc_precision_at_1_std value: 2.7432795333657802 - type: nauc_precision_at_20_diff1 value: 10.198754689783367 - type: nauc_precision_at_20_max value: 13.924027767491506 - type: nauc_precision_at_20_std value: 14.451908759139476 - type: nauc_precision_at_3_diff1 value: 15.398209236569278 - type: nauc_precision_at_3_max value: 12.545120019251204 - type: nauc_precision_at_3_std value: 5.4174103114283865 - type: nauc_precision_at_5_diff1 value: 11.38440294831457 - type: nauc_precision_at_5_max value: 12.929100768052521 - type: nauc_precision_at_5_std value: 3.8826269454849633 - type: nauc_recall_at_1000_diff1 value: 3.9395149122426343 - type: nauc_recall_at_1000_max value: 15.24096774447918 - type: nauc_recall_at_1000_std value: 24.88975234530502 - type: nauc_recall_at_100_diff1 value: 6.580896935042611 - type: nauc_recall_at_100_max value: 14.23980695602868 - type: nauc_recall_at_100_std value: 19.479776494947476 - type: nauc_recall_at_10_diff1 value: 10.311801645360179 - type: nauc_recall_at_10_max value: 15.658321964659283 - type: nauc_recall_at_10_std value: 
9.723932481966557 - type: nauc_recall_at_1_diff1 value: 23.710070267313093 - type: nauc_recall_at_1_max value: 11.885861612261381 - type: nauc_recall_at_1_std value: 2.3279474317156885 - type: nauc_recall_at_20_diff1 value: 10.24091592712925 - type: nauc_recall_at_20_max value: 13.830865415282112 - type: nauc_recall_at_20_std value: 14.04027736904607 - type: nauc_recall_at_3_diff1 value: 15.41219835513545 - type: nauc_recall_at_3_max value: 12.333359628892342 - type: nauc_recall_at_3_std value: 5.024648219558654 - type: nauc_recall_at_5_diff1 value: 11.349574574035458 - type: nauc_recall_at_5_max value: 12.617927107492545 - type: nauc_recall_at_5_std value: 3.5053356310188333 - type: ndcg_at_1 value: 11.3 - type: ndcg_at_10 value: 9.735000000000001 - type: ndcg_at_100 value: 15.195 - type: ndcg_at_1000 value: 20.23 - type: ndcg_at_20 value: 11.561 - type: ndcg_at_3 value: 9.144 - type: ndcg_at_5 value: 7.953 - type: precision_at_1 value: 11.3 - type: precision_at_10 value: 5.11 - type: precision_at_100 value: 1.3050000000000002 - type: precision_at_1000 value: 0.253 - type: precision_at_20 value: 3.615 - type: precision_at_3 value: 8.533 - type: precision_at_5 value: 6.98 - type: recall_at_1 value: 2.29 - type: recall_at_10 value: 10.362 - type: recall_at_100 value: 26.457000000000004 - type: recall_at_1000 value: 51.283 - type: recall_at_20 value: 14.649999999999999 - type: recall_at_3 value: 5.175 - type: recall_at_5 value: 7.0680000000000005 - task: type: STS dataset: name: MTEB SICK-R (default) type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cosine_pearson value: 72.23089156225602 - type: cosine_spearman value: 64.63447730457894 - type: euclidean_pearson value: 65.26536048964267 - type: euclidean_spearman value: 60.05876325942518 - type: main_score value: 64.63447730457894 - type: manhattan_pearson value: 63.245519161378716 - type: manhattan_spearman value: 59.28103411973211 - task: type: STS dataset: name: MTEB STS12 (default) type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cosine_pearson value: 63.108487890245115 - type: cosine_spearman value: 58.06781798364534 - type: euclidean_pearson value: 51.00455103977482 - type: euclidean_spearman value: 47.056606990769154 - type: main_score value: 58.06781798364534 - type: manhattan_pearson value: 46.6691142816116 - type: manhattan_spearman value: 43.82268675196447 - task: type: STS dataset: name: MTEB STS13 (default) type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cosine_pearson value: 67.9221550677534 - type: cosine_spearman value: 68.7571596382501 - type: euclidean_pearson value: 59.4362693562299 - type: euclidean_spearman value: 59.90654031756741 - type: main_score value: 68.7571596382501 - type: manhattan_pearson value: 58.84015922334945 - type: manhattan_spearman value: 58.764668284077416 - task: type: STS dataset: name: MTEB STS14 (default) type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cosine_pearson value: 66.96538071580031 - type: cosine_spearman value: 65.42522405186078 - type: euclidean_pearson value: 58.34297446892109 - type: euclidean_spearman value: 57.95969868379801 - type: main_score value: 65.42522405186078 - type: manhattan_pearson value: 57.158803416050354 - type: manhattan_spearman value: 56.70345912508504 - task: type: STS dataset: name: MTEB STS15 
(default) type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cosine_pearson value: 74.37524523034543 - type: cosine_spearman value: 75.08524309134856 - type: euclidean_pearson value: 59.05421371900137 - type: euclidean_spearman value: 60.8963245864918 - type: main_score value: 75.08524309134856 - type: manhattan_pearson value: 58.9258972492414 - type: manhattan_spearman value: 60.102419570033106 - task: type: STS dataset: name: MTEB STS16 (default) type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cosine_pearson value: 63.085067266542495 - type: cosine_spearman value: 65.38033636986424 - type: euclidean_pearson value: 52.52293105293661 - type: euclidean_spearman value: 54.599090360405086 - type: main_score value: 65.38033636986424 - type: manhattan_pearson value: 52.04583269035374 - type: manhattan_spearman value: 53.418934610254134 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 23.019969311198167 - type: cosine_spearman value: 17.411472418823667 - type: euclidean_pearson value: -15.515358361955128 - type: euclidean_spearman value: -15.677190499343482 - type: main_score value: 17.411472418823667 - type: manhattan_pearson value: -12.729052547730687 - type: manhattan_spearman value: -12.288504263696268 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 21.269195172077147 - type: cosine_spearman value: 18.575886451336775 - type: euclidean_pearson value: -10.21009784982811 - type: euclidean_spearman value: -12.92229729710694 - type: main_score value: 18.575886451336775 - type: manhattan_pearson value: -7.899161245683782 - type: manhattan_spearman value: -10.894951447088 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 4.556875032985485 - type: cosine_spearman value: 2.0609547970913806 - type: euclidean_pearson value: -11.715271322099575 - type: euclidean_spearman value: -11.045818218942449 - type: main_score value: 2.0609547970913806 - type: manhattan_pearson value: -13.961076499664834 - type: manhattan_spearman value: -13.632861374757931 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 77.65125036324002 - type: cosine_spearman value: 78.69054832378838 - type: euclidean_pearson value: 65.42262389971837 - type: euclidean_spearman value: 66.17771023288537 - type: main_score value: 78.69054832378838 - type: manhattan_pearson value: 63.99535802918511 - type: manhattan_spearman value: 64.5958799855611 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 11.304963266444723 - type: cosine_spearman value: 9.07719919328374 - type: euclidean_pearson value: -6.686339553470129 - type: euclidean_spearman value: -13.741969244577302 - type: main_score value: 9.07719919328374 - type: manhattan_pearson value: -8.751096396459193 - 
type: manhattan_spearman value: -15.472834128866678 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: -0.9487180608661593 - type: cosine_spearman value: -3.5467049032356264 - type: euclidean_pearson value: -22.379136687351238 - type: euclidean_spearman value: -23.937922436585392 - type: main_score value: -3.5467049032356264 - type: manhattan_pearson value: -23.462933935885573 - type: manhattan_spearman value: -22.402845778068887 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 26.412738827821325 - type: cosine_spearman value: 21.096028679832475 - type: euclidean_pearson value: -12.961356992788911 - type: euclidean_spearman value: -13.439656615197324 - type: main_score value: 21.096028679832475 - type: manhattan_pearson value: -13.312399929525135 - type: manhattan_spearman value: -13.320455244709303 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 18.315235047821027 - type: cosine_spearman value: 15.405153060148468 - type: euclidean_pearson value: -16.19883745793275 - type: euclidean_spearman value: -16.332471299959188 - type: main_score value: 15.405153060148468 - type: manhattan_pearson value: -15.174493494372754 - type: manhattan_spearman value: -14.235895631091836 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 20.46710263573031 - type: cosine_spearman value: 28.326540334389122 - type: euclidean_pearson value: 20.858737030398395 - type: euclidean_spearman value: 29.872601047020126 - type: main_score value: 28.326540334389122 - type: manhattan_pearson value: 19.218328249978722 - type: manhattan_spearman value: 33.264521141243655 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: -3.5232243177317475 - type: cosine_spearman value: 4.537053084710515 - type: euclidean_pearson value: 6.374530133957361 - type: euclidean_spearman value: 3.6684963723679562 - type: main_score value: 4.537053084710515 - type: manhattan_pearson value: 6.918896438279671 - type: manhattan_spearman value: 1.9104862843510344 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 42.353863109448966 - type: cosine_spearman value: 52.55694057880419 - type: euclidean_pearson value: 41.58894055719116 - type: euclidean_spearman value: 50.499978942016014 - type: main_score value: 52.55694057880419 - type: manhattan_pearson value: 39.23263050152607 - type: manhattan_spearman value: 47.982776818718506 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 9.895824519159905 - type: cosine_spearman value: 14.528808646639648 - type: euclidean_pearson value: 30.766730901000265 - type: 
euclidean_spearman value: 16.482305685897398 - type: main_score value: 14.528808646639648 - type: manhattan_pearson value: 32.72091783931039 - type: manhattan_spearman value: 11.606377075910054 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 10.835100493377169 - type: cosine_spearman value: 13.188080238562986 - type: euclidean_pearson value: 13.222129117792575 - type: euclidean_spearman value: 16.35349476750803 - type: main_score value: 13.188080238562986 - type: manhattan_pearson value: 18.24829227713276 - type: manhattan_spearman value: 21.542234667592027 - task: type: STS dataset: name: MTEB STSBenchmark (default) type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cosine_pearson value: 65.71454631261894 - type: cosine_spearman value: 65.48413591571544 - type: euclidean_pearson value: 57.20872936896835 - type: euclidean_spearman value: 57.60081037404292 - type: main_score value: 65.48413591571544 - type: manhattan_pearson value: 55.60537290238107 - type: manhattan_spearman value: 56.096969186945564 - task: type: Reranking dataset: name: MTEB SciDocsRR (default) type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: main_score value: 68.56134632503664 - type: map value: 68.56134632503664 - type: mrr value: 88.76940234783373 - type: nAUC_map_diff1 value: 12.337237293429535 - type: nAUC_map_max value: 56.05626340436826 - type: nAUC_map_std value: 66.20136946235245 - type: nAUC_mrr_diff1 value: 49.13360859462996 - type: nAUC_mrr_max value: 75.19817364500312 - type: nAUC_mrr_std value: 71.27479674596098 - task: type: Retrieval dataset: name: MTEB SciFact (default) type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: main_score value: 45.168 - type: map_at_1 value: 31.722 - type: map_at_10 value: 40.361000000000004 - type: map_at_100 value: 41.412 - type: map_at_1000 value: 41.483 - type: map_at_20 value: 41.026 - type: map_at_3 value: 37.676 - type: map_at_5 value: 39.15 - type: mrr_at_1 value: 33.666666666666664 - type: mrr_at_10 value: 41.68544973544974 - type: mrr_at_100 value: 42.57351821618796 - type: mrr_at_1000 value: 42.63566974762014 - type: mrr_at_20 value: 42.24279031798382 - type: mrr_at_3 value: 39.16666666666666 - type: mrr_at_5 value: 40.56666666666666 - type: nauc_map_at_1000_diff1 value: 55.77535499706605 - type: nauc_map_at_1000_max value: 37.686384513064496 - type: nauc_map_at_1000_std value: -0.38356448588082925 - type: nauc_map_at_100_diff1 value: 55.76685805908298 - type: nauc_map_at_100_max value: 37.69512830675277 - type: nauc_map_at_100_std value: -0.3816790631470584 - type: nauc_map_at_10_diff1 value: 55.31825864401214 - type: nauc_map_at_10_max value: 37.88770668112794 - type: nauc_map_at_10_std value: -0.6860500769894244 - type: nauc_map_at_1_diff1 value: 62.113628227161165 - type: nauc_map_at_1_max value: 37.183535942278596 - type: nauc_map_at_1_std value: -3.5410481282149067 - type: nauc_map_at_20_diff1 value: 55.65751983454559 - type: nauc_map_at_20_max value: 37.69345024816029 - type: nauc_map_at_20_std value: -0.43593256548163833 - type: nauc_map_at_3_diff1 value: 55.82307496058825 - type: nauc_map_at_3_max value: 36.720146164571474 - type: nauc_map_at_3_std value: -2.819390810134275 - type: 
nauc_map_at_5_diff1 value: 55.53584523712401 - type: nauc_map_at_5_max value: 37.845081976188375 - type: nauc_map_at_5_std value: -1.9066901557785676 - type: nauc_mrr_at_1000_diff1 value: 56.418676727795024 - type: nauc_mrr_at_1000_max value: 38.304224136608866 - type: nauc_mrr_at_1000_std value: 2.4996505957652198 - type: nauc_mrr_at_100_diff1 value: 56.39703976435698 - type: nauc_mrr_at_100_max value: 38.31871253356022 - type: nauc_mrr_at_100_std value: 2.499322381767784 - type: nauc_mrr_at_10_diff1 value: 56.17576873119264 - type: nauc_mrr_at_10_max value: 38.63458360266209 - type: nauc_mrr_at_10_std value: 2.8572655679787973 - type: nauc_mrr_at_1_diff1 value: 63.26354576298176 - type: nauc_mrr_at_1_max value: 38.41560245413969 - type: nauc_mrr_at_1_std value: -0.17074584083479885 - type: nauc_mrr_at_20_diff1 value: 56.301767376204936 - type: nauc_mrr_at_20_max value: 38.376041663808316 - type: nauc_mrr_at_20_std value: 2.649049607362875 - type: nauc_mrr_at_3_diff1 value: 56.70849572743409 - type: nauc_mrr_at_3_max value: 37.09106878190702 - type: nauc_mrr_at_3_std value: 0.5218568736162024 - type: nauc_mrr_at_5_diff1 value: 56.116869610402674 - type: nauc_mrr_at_5_max value: 38.448039539152745 - type: nauc_mrr_at_5_std value: 1.7341042169043408 - type: nauc_ndcg_at_1000_diff1 value: 54.78225202376091 - type: nauc_ndcg_at_1000_max value: 38.38144373884326 - type: nauc_ndcg_at_1000_std value: 2.6358234061241586 - type: nauc_ndcg_at_100_diff1 value: 54.4093856226575 - type: nauc_ndcg_at_100_max value: 38.60612682388555 - type: nauc_ndcg_at_100_std value: 2.69908939213741 - type: nauc_ndcg_at_10_diff1 value: 52.832583000255795 - type: nauc_ndcg_at_10_max value: 38.941545213039916 - type: nauc_ndcg_at_10_std value: 2.4826858084884753 - type: nauc_ndcg_at_1_diff1 value: 63.26354576298176 - type: nauc_ndcg_at_1_max value: 38.41560245413969 - type: nauc_ndcg_at_1_std value: -0.17074584083479885 - type: nauc_ndcg_at_20_diff1 value: 53.5430044109149 - type: nauc_ndcg_at_20_max value: 38.10605834841827 - type: nauc_ndcg_at_20_std value: 2.5820729076155344 - type: nauc_ndcg_at_3_diff1 value: 53.98354338931932 - type: nauc_ndcg_at_3_max value: 36.522639379347815 - type: nauc_ndcg_at_3_std value: -1.9435738031229932 - type: nauc_ndcg_at_5_diff1 value: 53.263204590280175 - type: nauc_ndcg_at_5_max value: 38.76301110063584 - type: nauc_ndcg_at_5_std value: -0.44894792520114274 - type: nauc_precision_at_1000_diff1 value: 2.6725425569998733 - type: nauc_precision_at_1000_max value: 18.217728894320416 - type: nauc_precision_at_1000_std value: 41.76202644150659 - type: nauc_precision_at_100_diff1 value: 23.894022947191242 - type: nauc_precision_at_100_max value: 30.465092081989397 - type: nauc_precision_at_100_std value: 32.67941090228055 - type: nauc_precision_at_10_diff1 value: 35.758108716102925 - type: nauc_precision_at_10_max value: 38.043682768211404 - type: nauc_precision_at_10_std value: 18.94024295472207 - type: nauc_precision_at_1_diff1 value: 63.26354576298176 - type: nauc_precision_at_1_max value: 38.41560245413969 - type: nauc_precision_at_1_std value: -0.17074584083479885 - type: nauc_precision_at_20_diff1 value: 34.336560890067275 - type: nauc_precision_at_20_max value: 31.7929720931013 - type: nauc_precision_at_20_std value: 23.571932003154835 - type: nauc_precision_at_3_diff1 value: 44.2135740101036 - type: nauc_precision_at_3_max value: 34.2245562189253 - type: nauc_precision_at_3_std value: 2.9819692098799435 - type: nauc_precision_at_5_diff1 value: 40.3310935749158 - type: 
nauc_precision_at_5_max value: 38.63563472800203 - type: nauc_precision_at_5_std value: 9.335714313996466 - type: nauc_recall_at_1000_diff1 value: 56.9369714312583 - type: nauc_recall_at_1000_max value: 45.8389590848331 - type: nauc_recall_at_1000_std value: 36.35310239203547 - type: nauc_recall_at_100_diff1 value: 48.24197135141656 - type: nauc_recall_at_100_max value: 42.702371394909264 - type: nauc_recall_at_100_std value: 13.330140889544886 - type: nauc_recall_at_10_diff1 value: 43.30066118896596 - type: nauc_recall_at_10_max value: 40.917885858677245 - type: nauc_recall_at_10_std value: 9.071473475388245 - type: nauc_recall_at_1_diff1 value: 62.113628227161165 - type: nauc_recall_at_1_max value: 37.183535942278596 - type: nauc_recall_at_1_std value: -3.5410481282149067 - type: nauc_recall_at_20_diff1 value: 44.24119164214377 - type: nauc_recall_at_20_max value: 37.145932987172344 - type: nauc_recall_at_20_std value: 9.064570006703589 - type: nauc_recall_at_3_diff1 value: 47.503698426289645 - type: nauc_recall_at_3_max value: 35.181130291364084 - type: nauc_recall_at_3_std value: -4.399329816832574 - type: nauc_recall_at_5_diff1 value: 45.72301353292787 - type: nauc_recall_at_5_max value: 40.71394881642516 - type: nauc_recall_at_5_std value: -0.017691813104162315 - type: ndcg_at_1 value: 33.667 - type: ndcg_at_10 value: 45.168 - type: ndcg_at_100 value: 50.080000000000005 - type: ndcg_at_1000 value: 51.878 - type: ndcg_at_20 value: 47.394999999999996 - type: ndcg_at_3 value: 39.89 - type: ndcg_at_5 value: 42.418 - type: precision_at_1 value: 33.667 - type: precision_at_10 value: 6.4670000000000005 - type: precision_at_100 value: 0.9169999999999999 - type: precision_at_1000 value: 0.108 - type: precision_at_20 value: 3.733 - type: precision_at_3 value: 16.111 - type: precision_at_5 value: 11.133 - type: recall_at_1 value: 31.722 - type: recall_at_10 value: 58.833 - type: recall_at_100 value: 81.472 - type: recall_at_1000 value: 95.367 - type: recall_at_20 value: 67.333 - type: recall_at_3 value: 44.5 - type: recall_at_5 value: 50.693999999999996 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions (default) type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cosine_accuracy value: 99.6 - type: cosine_accuracy_threshold value: 70.9090530872345 - type: cosine_ap value: 84.58074609745917 - type: cosine_f1 value: 78.88324873096447 - type: cosine_f1_threshold value: 67.8337812423706 - type: cosine_precision value: 80.10309278350516 - type: cosine_recall value: 77.7 - type: dot_accuracy value: 99.08415841584159 - type: dot_accuracy_threshold value: 66384.36279296875 - type: dot_ap value: 40.87152918329808 - type: dot_f1 value: 43.734015345268546 - type: dot_f1_threshold value: 51844.3115234375 - type: dot_precision value: 38.11292719167905 - type: dot_recall value: 51.300000000000004 - type: euclidean_accuracy value: 99.34158415841584 - type: euclidean_accuracy_threshold value: 1737.0550155639648 - type: euclidean_ap value: 62.13537131791382 - type: euclidean_f1 value: 61.27982646420824 - type: euclidean_f1_threshold value: 1902.7210235595703 - type: euclidean_precision value: 66.9431279620853 - type: euclidean_recall value: 56.49999999999999 - type: main_score value: 84.58074648388171 - type: manhattan_accuracy value: 99.29306930693069 - type: manhattan_accuracy_threshold value: 31327.55126953125 - type: manhattan_ap value: 57.216782641023634 - type: manhattan_f1 value: 
57.296715131933226 - type: manhattan_f1_threshold value: 35300.360107421875 - type: manhattan_precision value: 62.07701283547258 - type: manhattan_recall value: 53.2 - type: max_accuracy value: 99.6 - type: max_ap value: 84.58074648388171 - type: max_f1 value: 78.88324873096447 - type: max_precision value: 80.10309278350516 - type: max_recall value: 77.7 - type: similarity_accuracy value: 99.6 - type: similarity_accuracy_threshold value: 70.90907096862793 - type: similarity_ap value: 84.58074648388171 - type: similarity_f1 value: 78.88324873096447 - type: similarity_f1_threshold value: 67.83377528190613 - type: similarity_precision value: 80.10309278350516 - type: similarity_recall value: 77.7 - task: type: Clustering dataset: name: MTEB StackExchangeClustering (default) type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: main_score value: 29.912118265776584 - type: v_measure value: 29.912118265776584 - type: v_measure_std value: 4.886538571793255 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P (default) type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: main_score value: 26.453873918768515 - type: v_measure value: 26.453873918768515 - type: v_measure_std value: 1.585352021846518 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions (default) type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: main_score value: 43.20040993546698 - type: map value: 43.20040993546698 - type: mrr value: 43.80615503777269 - type: nAUC_map_diff1 value: 35.32927557160638 - type: nAUC_map_max value: 16.99796264171325 - type: nAUC_map_std value: 8.295193352979423 - type: nAUC_mrr_diff1 value: 34.8181761798891 - type: nAUC_mrr_max value: 17.88328922464567 - type: nAUC_mrr_std value: 9.16364844640502 - task: type: Summarization dataset: name: MTEB SummEval (default) type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cosine_pearson value: 29.837020935210244 - type: cosine_spearman value: 29.129192154438023 - type: dot_pearson value: 18.178493108017275 - type: dot_spearman value: 20.21762456537728 - type: main_score value: 29.129192154438023 - type: pearson value: 29.837020935210244 - type: spearman value: 29.129192154438023 - task: type: Retrieval dataset: name: MTEB TRECCOVID (default) type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: main_score value: 44.76 - type: map_at_1 value: 0.122 - type: map_at_10 value: 0.878 - type: map_at_100 value: 4.018999999999999 - type: map_at_1000 value: 9.258 - type: map_at_20 value: 1.415 - type: map_at_3 value: 0.338 - type: map_at_5 value: 0.526 - type: mrr_at_1 value: 56.00000000000001 - type: mrr_at_10 value: 66.07222222222222 - type: mrr_at_100 value: 66.50064823204359 - type: mrr_at_1000 value: 66.51969585109121 - type: mrr_at_20 value: 66.32619047619048 - type: mrr_at_3 value: 64.0 - type: mrr_at_5 value: 65.4 - type: nauc_map_at_1000_diff1 value: -8.083047410631284 - type: nauc_map_at_1000_max value: 47.53446279402127 - type: nauc_map_at_1000_std value: 59.96216691295325 - type: nauc_map_at_100_diff1 value: -7.739773992175417 - type: nauc_map_at_100_max value: 30.194947003511906 - type: nauc_map_at_100_std value: 44.21694014053059 - type: 
nauc_map_at_10_diff1 value: -8.68905409160312 - type: nauc_map_at_10_max value: 1.0122820499818854 - type: nauc_map_at_10_std value: 10.974665544255386 - type: nauc_map_at_1_diff1 value: -16.880540022219577 - type: nauc_map_at_1_max value: -1.6691558276733682 - type: nauc_map_at_1_std value: 6.632235219994278 - type: nauc_map_at_20_diff1 value: -10.664828887394167 - type: nauc_map_at_20_max value: 8.898505999792377 - type: nauc_map_at_20_std value: 19.532041203224537 - type: nauc_map_at_3_diff1 value: -9.330417800583005 - type: nauc_map_at_3_max value: -2.790285962665549 - type: nauc_map_at_3_std value: 7.4958144373878115 - type: nauc_map_at_5_diff1 value: -8.040423130198358 - type: nauc_map_at_5_max value: -3.3129010825045415 - type: nauc_map_at_5_std value: 7.140151615092149 - type: nauc_mrr_at_1000_diff1 value: 1.841967269111446 - type: nauc_mrr_at_1000_max value: 19.218649788535302 - type: nauc_mrr_at_1000_std value: 34.05865638916581 - type: nauc_mrr_at_100_diff1 value: 1.7162098924657265 - type: nauc_mrr_at_100_max value: 19.23051404537602 - type: nauc_mrr_at_100_std value: 34.043079302164195 - type: nauc_mrr_at_10_diff1 value: 2.671707378955639 - type: nauc_mrr_at_10_max value: 19.61245805830406 - type: nauc_mrr_at_10_std value: 33.860985121025664 - type: nauc_mrr_at_1_diff1 value: -4.9369747899159915 - type: nauc_mrr_at_1_max value: 18.70315693845101 - type: nauc_mrr_at_1_std value: 31.19747899159659 - type: nauc_mrr_at_20_diff1 value: 2.2679812975747393 - type: nauc_mrr_at_20_max value: 18.88077606059037 - type: nauc_mrr_at_20_std value: 34.45425371871214 - type: nauc_mrr_at_3_diff1 value: 2.8102165970771873 - type: nauc_mrr_at_3_max value: 19.9547668754349 - type: nauc_mrr_at_3_std value: 32.230232254697256 - type: nauc_mrr_at_5_diff1 value: 2.056260588169657 - type: nauc_mrr_at_5_max value: 20.00122859400373 - type: nauc_mrr_at_5_std value: 33.385407684686385 - type: nauc_ndcg_at_1000_diff1 value: -10.634273510767326 - type: nauc_ndcg_at_1000_max value: 36.83968691011661 - type: nauc_ndcg_at_1000_std value: 52.736058094433346 - type: nauc_ndcg_at_100_diff1 value: 0.9900193680768492 - type: nauc_ndcg_at_100_max value: 33.837077460710816 - type: nauc_ndcg_at_100_std value: 47.8838924407509 - type: nauc_ndcg_at_10_diff1 value: -0.17969764223238982 - type: nauc_ndcg_at_10_max value: 20.98725746563983 - type: nauc_ndcg_at_10_std value: 34.94240929181837 - type: nauc_ndcg_at_1_diff1 value: -15.90606217193831 - type: nauc_ndcg_at_1_max value: 14.845386058908314 - type: nauc_ndcg_at_1_std value: 27.80603225543255 - type: nauc_ndcg_at_20_diff1 value: -2.610422392632454 - type: nauc_ndcg_at_20_max value: 23.712304742527216 - type: nauc_ndcg_at_20_std value: 37.068579726264616 - type: nauc_ndcg_at_3_diff1 value: -1.296272800008927 - type: nauc_ndcg_at_3_max value: 21.18656426647708 - type: nauc_ndcg_at_3_std value: 35.00996581698709 - type: nauc_ndcg_at_5_diff1 value: 0.9228761005863567 - type: nauc_ndcg_at_5_max value: 20.533612497239876 - type: nauc_ndcg_at_5_std value: 33.746097407453505 - type: nauc_precision_at_1000_diff1 value: 2.212860642793429 - type: nauc_precision_at_1000_max value: 42.83693570346947 - type: nauc_precision_at_1000_std value: 56.34352031668012 - type: nauc_precision_at_100_diff1 value: 3.0398947714805473 - type: nauc_precision_at_100_max value: 37.33236107395733 - type: nauc_precision_at_100_std value: 51.46402436623219 - type: nauc_precision_at_10_diff1 value: 7.751232774751116 - type: nauc_precision_at_10_max value: 23.34708251923681 - type: 
nauc_precision_at_10_std value: 35.85367282451008 - type: nauc_precision_at_1_diff1 value: -4.9369747899159915 - type: nauc_precision_at_1_max value: 18.70315693845101 - type: nauc_precision_at_1_std value: 31.19747899159659 - type: nauc_precision_at_20_diff1 value: 2.6773822842226416 - type: nauc_precision_at_20_max value: 27.773465147606125 - type: nauc_precision_at_20_std value: 40.8346461486944 - type: nauc_precision_at_3_diff1 value: 10.025088532578964 - type: nauc_precision_at_3_max value: 23.118618169053402 - type: nauc_precision_at_3_std value: 36.718048256708336 - type: nauc_precision_at_5_diff1 value: 10.65022351628208 - type: nauc_precision_at_5_max value: 21.415166686410064 - type: nauc_precision_at_5_std value: 34.26813225180961 - type: nauc_recall_at_1000_diff1 value: -15.087404046972116 - type: nauc_recall_at_1000_max value: 36.36800488936171 - type: nauc_recall_at_1000_std value: 51.729821669192646 - type: nauc_recall_at_100_diff1 value: -10.615762204096805 - type: nauc_recall_at_100_max value: 24.08701047895384 - type: nauc_recall_at_100_std value: 39.67258536375483 - type: nauc_recall_at_10_diff1 value: -7.067104621282379 - type: nauc_recall_at_10_max value: -1.9673720028196857 - type: nauc_recall_at_10_std value: 5.8769977919557785 - type: nauc_recall_at_1_diff1 value: -16.880540022219577 - type: nauc_recall_at_1_max value: -1.6691558276733682 - type: nauc_recall_at_1_std value: 6.632235219994278 - type: nauc_recall_at_20_diff1 value: -10.004017517116134 - type: nauc_recall_at_20_max value: 4.75366175077321 - type: nauc_recall_at_20_std value: 17.49313281300582 - type: nauc_recall_at_3_diff1 value: 0.5629010662361658 - type: nauc_recall_at_3_max value: -7.882772867263189 - type: nauc_recall_at_3_std value: 2.238252718990748 - type: nauc_recall_at_5_diff1 value: -2.374440704673045 - type: nauc_recall_at_5_max value: -6.804152379891169 - type: nauc_recall_at_5_std value: 1.5154169968307243 - type: ndcg_at_1 value: 50.0 - type: ndcg_at_10 value: 44.76 - type: ndcg_at_100 value: 31.022 - type: ndcg_at_1000 value: 26.223000000000003 - type: ndcg_at_20 value: 41.703 - type: ndcg_at_3 value: 49.838 - type: ndcg_at_5 value: 48.219 - type: precision_at_1 value: 56.00000000000001 - type: precision_at_10 value: 48.0 - type: precision_at_100 value: 31.66 - type: precision_at_1000 value: 12.598 - type: precision_at_20 value: 44.1 - type: precision_at_3 value: 55.333 - type: precision_at_5 value: 52.400000000000006 - type: recall_at_1 value: 0.122 - type: recall_at_10 value: 1.093 - type: recall_at_100 value: 6.6339999999999995 - type: recall_at_1000 value: 24.934 - type: recall_at_20 value: 1.926 - type: recall_at_3 value: 0.379 - type: recall_at_5 value: 0.611 - task: type: Retrieval dataset: name: MTEB Touche2020 (default) type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: main_score value: 11.737 - type: map_at_1 value: 0.86 - type: map_at_10 value: 3.569 - type: map_at_100 value: 6.272 - type: map_at_1000 value: 7.591 - type: map_at_20 value: 4.599 - type: map_at_3 value: 2.1229999999999998 - type: map_at_5 value: 2.738 - type: mrr_at_1 value: 14.285714285714285 - type: mrr_at_10 value: 29.435536119209587 - type: mrr_at_100 value: 30.863925639814255 - type: mrr_at_1000 value: 30.863925639814255 - type: mrr_at_20 value: 30.459159854417955 - type: mrr_at_3 value: 25.510204081632658 - type: mrr_at_5 value: 27.34693877551021 - type: nauc_map_at_1000_diff1 value: -15.225998878041644 - type: nauc_map_at_1000_max value: 
-37.62784726123152 - type: nauc_map_at_1000_std value: -40.49662774337752 - type: nauc_map_at_100_diff1 value: -17.241253449657865 - type: nauc_map_at_100_max value: -39.87742899339114 - type: nauc_map_at_100_std value: -43.461254035113015 - type: nauc_map_at_10_diff1 value: -18.2332059968299 - type: nauc_map_at_10_max value: -33.098533635572316 - type: nauc_map_at_10_std value: -36.84786857582744 - type: nauc_map_at_1_diff1 value: -14.429325321729767 - type: nauc_map_at_1_max value: -27.646469766953775 - type: nauc_map_at_1_std value: -22.319540072780857 - type: nauc_map_at_20_diff1 value: -20.20731257532461 - type: nauc_map_at_20_max value: -38.80220712468868 - type: nauc_map_at_20_std value: -42.26801449643297 - type: nauc_map_at_3_diff1 value: -20.779843046007446 - type: nauc_map_at_3_max value: -39.53842231266448 - type: nauc_map_at_3_std value: -33.56558692084304 - type: nauc_map_at_5_diff1 value: -19.66219267837773 - type: nauc_map_at_5_max value: -37.06326821351946 - type: nauc_map_at_5_std value: -36.957816069501106 - type: nauc_mrr_at_1000_diff1 value: -18.677101035122053 - type: nauc_mrr_at_1000_max value: -35.95960963659799 - type: nauc_mrr_at_1000_std value: -37.756381781688766 - type: nauc_mrr_at_100_diff1 value: -18.677101035122053 - type: nauc_mrr_at_100_max value: -35.95960963659799 - type: nauc_mrr_at_100_std value: -37.756381781688766 - type: nauc_mrr_at_10_diff1 value: -18.191174363420938 - type: nauc_mrr_at_10_max value: -36.36477111799858 - type: nauc_mrr_at_10_std value: -39.49983032196089 - type: nauc_mrr_at_1_diff1 value: -12.86145482800598 - type: nauc_mrr_at_1_max value: -24.487052771897265 - type: nauc_mrr_at_1_std value: -20.52556557495329 - type: nauc_mrr_at_20_diff1 value: -18.60997224510311 - type: nauc_mrr_at_20_max value: -35.79812432900392 - type: nauc_mrr_at_20_std value: -38.30897001988249 - type: nauc_mrr_at_3_diff1 value: -25.212140640066988 - type: nauc_mrr_at_3_max value: -37.42857037379736 - type: nauc_mrr_at_3_std value: -33.92966300567053 - type: nauc_mrr_at_5_diff1 value: -20.640207781943023 - type: nauc_mrr_at_5_max value: -35.90540839091833 - type: nauc_mrr_at_5_std value: -37.12194516618917 - type: nauc_ndcg_at_1000_diff1 value: -0.11963001842743652 - type: nauc_ndcg_at_1000_max value: -27.9178453384242 - type: nauc_ndcg_at_1000_std value: -29.166624762081454 - type: nauc_ndcg_at_100_diff1 value: -12.091987337723797 - type: nauc_ndcg_at_100_max value: -40.82288385710299 - type: nauc_ndcg_at_100_std value: -46.76058302199178 - type: nauc_ndcg_at_10_diff1 value: -15.828838900116663 - type: nauc_ndcg_at_10_max value: -28.47740914640201 - type: nauc_ndcg_at_10_std value: -39.61604315349557 - type: nauc_ndcg_at_1_diff1 value: -14.384548055467114 - type: nauc_ndcg_at_1_max value: -22.305774061633038 - type: nauc_ndcg_at_1_std value: -21.059675286871425 - type: nauc_ndcg_at_20_diff1 value: -18.484696865224056 - type: nauc_ndcg_at_20_max value: -36.75133962699779 - type: nauc_ndcg_at_20_std value: -45.00325838241873 - type: nauc_ndcg_at_3_diff1 value: -19.074080663504287 - type: nauc_ndcg_at_3_max value: -32.15749618445631 - type: nauc_ndcg_at_3_std value: -31.15778856351426 - type: nauc_ndcg_at_5_diff1 value: -17.075509240224072 - type: nauc_ndcg_at_5_max value: -30.166046803360015 - type: nauc_ndcg_at_5_std value: -35.59973493388717 - type: nauc_precision_at_1000_diff1 value: 21.84245546736574 - type: nauc_precision_at_1000_max value: 38.516370901785876 - type: nauc_precision_at_1000_std value: 35.95207951618072 - type: nauc_precision_at_100_diff1 
value: 1.3876384351895321 - type: nauc_precision_at_100_max value: -17.672181963540233 - type: nauc_precision_at_100_std value: -35.100445067927325 - type: nauc_precision_at_10_diff1 value: -8.38470122188378 - type: nauc_precision_at_10_max value: -21.522897385575003 - type: nauc_precision_at_10_std value: -42.22825505115226 - type: nauc_precision_at_1_diff1 value: -12.86145482800598 - type: nauc_precision_at_1_max value: -24.487052771897265 - type: nauc_precision_at_1_std value: -20.52556557495329 - type: nauc_precision_at_20_diff1 value: -16.93969917788429 - type: nauc_precision_at_20_max value: -30.66989763742793 - type: nauc_precision_at_20_std value: -46.641569381752156 - type: nauc_precision_at_3_diff1 value: -20.209351145881417 - type: nauc_precision_at_3_max value: -37.489404692159376 - type: nauc_precision_at_3_std value: -36.11843668070083 - type: nauc_precision_at_5_diff1 value: -13.00046064709639 - type: nauc_precision_at_5_max value: -29.182846254852958 - type: nauc_precision_at_5_std value: -41.475754864735954 - type: nauc_recall_at_1000_diff1 value: 12.384650251660787 - type: nauc_recall_at_1000_max value: -22.150720232837372 - type: nauc_recall_at_1000_std value: -4.87263784450895 - type: nauc_recall_at_100_diff1 value: -10.460274590185362 - type: nauc_recall_at_100_max value: -46.395760301872606 - type: nauc_recall_at_100_std value: -44.967105074272865 - type: nauc_recall_at_10_diff1 value: -15.886566681130422 - type: nauc_recall_at_10_max value: -36.08360858042893 - type: nauc_recall_at_10_std value: -43.44706180483 - type: nauc_recall_at_1_diff1 value: -14.429325321729767 - type: nauc_recall_at_1_max value: -27.646469766953775 - type: nauc_recall_at_1_std value: -22.319540072780857 - type: nauc_recall_at_20_diff1 value: -20.572085163574663 - type: nauc_recall_at_20_max value: -45.09259936557314 - type: nauc_recall_at_20_std value: -50.36930511127456 - type: nauc_recall_at_3_diff1 value: -25.55698987960452 - type: nauc_recall_at_3_max value: -44.841701912628395 - type: nauc_recall_at_3_std value: -33.629299677212664 - type: nauc_recall_at_5_diff1 value: -21.025629383069223 - type: nauc_recall_at_5_max value: -41.163164440917406 - type: nauc_recall_at_5_std value: -40.978074434880654 - type: ndcg_at_1 value: 13.264999999999999 - type: ndcg_at_10 value: 11.737 - type: ndcg_at_100 value: 20.893 - type: ndcg_at_1000 value: 34.148 - type: ndcg_at_20 value: 12.781 - type: ndcg_at_3 value: 13.961000000000002 - type: ndcg_at_5 value: 12.735 - type: precision_at_1 value: 14.285999999999998 - type: precision_at_10 value: 11.429 - type: precision_at_100 value: 5.061 - type: precision_at_1000 value: 1.327 - type: precision_at_20 value: 9.796000000000001 - type: precision_at_3 value: 17.007 - type: precision_at_5 value: 14.693999999999999 - type: recall_at_1 value: 0.86 - type: recall_at_10 value: 7.962 - type: recall_at_100 value: 31.343 - type: recall_at_1000 value: 72.173 - type: recall_at_20 value: 13.209000000000001 - type: recall_at_3 value: 3.4639999999999995 - type: recall_at_5 value: 5.061 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification (default) type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 63.30078125000001 - type: ap value: 10.382758929598857 - type: ap_weighted value: 10.382758929598857 - type: f1 value: 47.95923360740176 - type: f1_weighted value: 71.3431138095925 - type: main_score value: 63.30078125000001 - task: type: Classification 
dataset: name: MTEB TweetSentimentExtractionClassification (default) type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 49.787775891341255 - type: f1 value: 49.934050367781495 - type: f1_weighted value: 49.25778188511025 - type: main_score value: 49.787775891341255 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering (default) type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: main_score value: 20.13387853092354 - type: v_measure value: 20.13387853092354 - type: v_measure_std value: 2.2532678030932582 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 (default) type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cosine_accuracy value: 82.44620611551528 - type: cosine_accuracy_threshold value: 67.1613335609436 - type: cosine_ap value: 61.027812391627634 - type: cosine_f1 value: 57.648077160875474 - type: cosine_f1_threshold value: 60.86677312850952 - type: cosine_precision value: 54.24714917384221 - type: cosine_recall value: 61.50395778364116 - type: dot_accuracy value: 78.45860404124694 - type: dot_accuracy_threshold value: 83239.31884765625 - type: dot_ap value: 44.32467940837404 - type: dot_f1 value: 47.685779137471634 - type: dot_f1_threshold value: 55795.2392578125 - type: dot_precision value: 38.08923222449945 - type: dot_recall value: 63.746701846965706 - type: euclidean_accuracy value: 80.58055671454967 - type: euclidean_accuracy_threshold value: 2302.2579193115234 - type: euclidean_ap value: 55.2462162515812 - type: euclidean_f1 value: 54.27702017356023 - type: euclidean_f1_threshold value: 2842.241096496582 - type: euclidean_precision value: 47.37359826873893 - type: euclidean_recall value: 63.53562005277045 - type: main_score value: 61.027837268240226 - type: manhattan_accuracy value: 80.77129403349824 - type: manhattan_accuracy_threshold value: 43584.36279296875 - type: manhattan_ap value: 56.045117634111655 - type: manhattan_f1 value: 54.80427046263346 - type: manhattan_f1_threshold value: 51295.8740234375 - type: manhattan_precision value: 49.78448275862069 - type: manhattan_recall value: 60.94986807387863 - type: max_accuracy value: 82.44620611551528 - type: max_ap value: 61.027837268240226 - type: max_f1 value: 57.648077160875474 - type: max_precision value: 54.24714917384221 - type: max_recall value: 63.746701846965706 - type: similarity_accuracy value: 82.44620611551528 - type: similarity_accuracy_threshold value: 67.1613335609436 - type: similarity_ap value: 61.027837268240226 - type: similarity_f1 value: 57.648077160875474 - type: similarity_f1_threshold value: 60.866761207580566 - type: similarity_precision value: 54.24714917384221 - type: similarity_recall value: 61.50395778364116 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus (default) type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cosine_accuracy value: 86.56032910311639 - type: cosine_accuracy_threshold value: 63.58056664466858 - type: cosine_ap value: 80.36069089360147 - type: cosine_f1 value: 72.49717349283344 - type: cosine_f1_threshold value: 57.18348026275635 - type: cosine_precision value: 68.87256600374194 - type: cosine_recall value: 76.52448413920541 - type: dot_accuracy value: 
83.29064307059417 - type: dot_accuracy_threshold value: 39571.136474609375 - type: dot_ap value: 70.9168154298791 - type: dot_f1 value: 65.80363636363637 - type: dot_f1_threshold value: 33795.39489746094 - type: dot_precision value: 62.348401323043 - type: dot_recall value: 69.66430551278103 - type: euclidean_accuracy value: 83.87472348352544 - type: euclidean_accuracy_threshold value: 1921.6852188110352 - type: euclidean_ap value: 72.19667035000438 - type: euclidean_f1 value: 64.49932928272706 - type: euclidean_f1_threshold value: 2122.101593017578 - type: euclidean_precision value: 66.14338889787992 - type: euclidean_recall value: 62.935016938712664 - type: main_score value: 80.36069259910931 - type: manhattan_accuracy value: 83.8514378856677 - type: manhattan_accuracy_threshold value: 35123.6572265625 - type: manhattan_ap value: 72.24797710989144 - type: manhattan_f1 value: 64.65182603184662 - type: manhattan_f1_threshold value: 38842.54150390625 - type: manhattan_precision value: 66.57692935225975 - type: manhattan_recall value: 62.83492454573453 - type: max_accuracy value: 86.56032910311639 - type: max_ap value: 80.36069259910931 - type: max_f1 value: 72.49717349283344 - type: max_precision value: 68.87256600374194 - type: max_recall value: 76.52448413920541 - type: similarity_accuracy value: 86.56032910311639 - type: similarity_accuracy_threshold value: 63.58058452606201 - type: similarity_ap value: 80.36069259910931 - type: similarity_f1 value: 72.49717349283344 - type: similarity_f1_threshold value: 57.18348026275635 - type: similarity_precision value: 68.87256600374194 - type: similarity_recall value: 76.52448413920541
---

# 🪲 brown-beetle-base-v1 Model Card

<div align="center">
  <img width="75%" alt="Beetle logo" src="./assets/beetle_logo.png">
</div>

> [!TIP]
> Beetles are some of the most diverse and interesting creatures on Earth. They are found in every environment, from the deepest oceans to the highest mountains. They are also known for their ability to adapt to a wide range of habitats and lifestyles. They are small, fast and powerful!

The beetle series of models is made to serve as good starting points for Static Embedding training (via TokenLearn or fine-tuning), as well as decent Static Embedding models in their own right. Each beetle model is meant to improve on the original **M2V_base_output** model in some way, and that is the threshold we set for each model (except the brown beetle series, which is the original model).

This model has been distilled from `baai/bge-base-en-v1.5`, applying PCA (kept at the same dimensionality as the original model) and Zipf weighting.

> [!NOTE]
> The brown beetle series is made for convenience in loading and using the model instead of having to run the distillation yourself, though it is pretty fast to reproduce anyway. If you want to use the original model by the folks from the Minish Lab, you can use the **M2V_base_output** model.

## Version Information

- **brown-beetle-base-v0**: The original model, without using PCA or Zipf. The lack of PCA and Zipf also makes this a decent model for further training.
- **brown-beetle-base-v0.1**: The original model, with PCA but of the same size as the original model. This model is great if you want to experiment with Zipf or other weighting methods.
- **brown-beetle-base-v1**: The original model, with PCA and Zipf.
- **brown-beetle-small-v1**: A smaller version of the original model, with PCA and Zipf. Equivalent to **M2V_base_output**.
- **brown-beetle-tiny-v1**: A tiny version of the original model, with PCA and Zipf.
- **brown-beetle-base-v1.1**: The original model, with PCA with 768 dimensions, applying Zipf and applying SIF re-weighting, learnt from a subset of the C4 corpus. This model is significantly better than the M2V_base_output model.
- **brown-beetle-small-v1.1**: A smaller version of the original model, with PCA with 256 dimensions, applying Zipf and applying SIF re-weighting, learnt from a subset of the C4 corpus. This model is significantly better than the M2V_base_output model but slightly worse than the brown-beetle-base-v1.1 model.
- **brown-beetle-tiny-v1.1**: A tiny version of the original model, with PCA with 128 dimensions, applying Zipf and applying SIF re-weighting, learnt from a subset of the C4 corpus. This model is significantly better than the M2V_base_output model but slightly worse than the brown-beetle-small-v1.1 model.
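The variants above differ mainly in the PCA dimensionality used during distillation. As a rough sketch of how a lower-dimensional variant could be produced with the `model2vec` distillation API (assuming the `model2vec[distill]` extra from the sections below is installed), the call might look like the following. The `pca_dims` value of 256 is taken from the small v1.1 description above; the other settings are assumptions, not the exact recipe behind the released small/tiny checkpoints.

```python
from model2vec.distill import distill

# Sketch only: assumed settings, not the exact recipe used for the
# released brown-beetle-small / brown-beetle-tiny checkpoints.
small_model = distill(
    model_name="baai/bge-base-en-v1.5",  # base model named in this card
    pca_dims=256,                        # e.g. 128 for a tiny-sized variant
    apply_zipf=True,
)

# Save the distilled variant locally (hypothetical output name)
small_model.save_pretrained("my-brown-beetle-small-sketch")
```

The recipe for this model itself is given in the "Reproduce this model" section further down.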
## Installation

Install model2vec using pip:

```bash
pip install model2vec
```

## Usage

Load this model using the `from_pretrained` method:

```python
from model2vec import StaticModel

# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("bhavnicksm/brown-beetle-base-v1")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```

Read more about the Model2Vec library [here](https://github.com/MinishLab/model2vec).

## Reproduce this model

To reproduce this model, install the `model2vec[distill]` package and use the following code:

```python
from model2vec.distill import distill

# Distill the model
m2v_model = distill(
    model_name="baai/bge-base-en-v1.5",
    pca_dims=768,
    apply_zipf=True,
)

# Save the model
m2v_model.save_pretrained("brown-beetle-base-v1")
```

## Comparison with other models

Coming soon...

## Acknowledgements

This model is made using the [Model2Vec](https://github.com/MinishLab/model2vec) library. Credit goes to the [Minish Lab](https://github.com/MinishLab) team for developing this library.

## Citation

Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.

```bibtex
@software{minishlab2024model2vec,
  author = {Stephan Tulkens and Thomas van Dongen},
  title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model},
  year = {2024},
  url = {https://github.com/MinishLab/model2vec},
}
```
15.493086436510678}, {"type": "nauc_ndcg_at_5_std", "value": -2.5841189511983695}, {"type": "nauc_precision_at_1000_diff1", "value": 5.162665834337446}, {"type": "nauc_precision_at_1000_max", "value": 5.426553384527509}, {"type": "nauc_precision_at_1000_std", "value": 6.1242440048302695}, {"type": "nauc_precision_at_100_diff1", "value": 5.240996534418689}, {"type": "nauc_precision_at_100_max", "value": 9.06975798955498}, {"type": "nauc_precision_at_100_std", "value": -2.961393279607517}, {"type": "nauc_precision_at_10_diff1", "value": 8.19432780347633}, {"type": "nauc_precision_at_10_max", "value": 16.033136985617734}, {"type": "nauc_precision_at_10_std", "value": 0.92060297716355}, {"type": "nauc_precision_at_1_diff1", "value": 26.65544099259078}, {"type": "nauc_precision_at_1_max", "value": 17.10769821821117}, {"type": "nauc_precision_at_1_std", "value": -2.72507465768404}, {"type": "nauc_precision_at_20_diff1", "value": 8.218392783839754}, {"type": "nauc_precision_at_20_max", "value": 9.279320896895346}, {"type": "nauc_precision_at_20_std", "value": 0.5719429607659788}, {"type": "nauc_precision_at_3_diff1", "value": 10.598049592179171}, {"type": "nauc_precision_at_3_max", "value": 18.292981072202778}, {"type": "nauc_precision_at_3_std", "value": -1.9747521095182612}, {"type": "nauc_precision_at_5_diff1", "value": 9.4592422188968}, {"type": "nauc_precision_at_5_max", "value": 16.820892184546253}, {"type": "nauc_precision_at_5_std", "value": -1.4503082963318303}, {"type": "nauc_recall_at_1000_diff1", "value": 11.42106802052846}, {"type": "nauc_recall_at_1000_max", "value": 7.7142629478343965}, {"type": "nauc_recall_at_1000_std", "value": 14.064107059885153}, {"type": "nauc_recall_at_100_diff1", "value": 9.533537910457907}, {"type": "nauc_recall_at_100_max", "value": 8.918433756778455}, {"type": "nauc_recall_at_100_std", "value": 0.6068026275245649}, {"type": "nauc_recall_at_10_diff1", "value": 9.410565718560424}, {"type": "nauc_recall_at_10_max", "value": 15.389790528147987}, {"type": "nauc_recall_at_10_std", "value": 2.911492221412525}, {"type": "nauc_recall_at_1_diff1", "value": 26.648517428464473}, {"type": "nauc_recall_at_1_max", "value": 14.172118938664543}, {"type": "nauc_recall_at_1_std", "value": -4.531793333515623}, {"type": "nauc_recall_at_20_diff1", "value": 9.507727153647583}, {"type": "nauc_recall_at_20_max", "value": 8.659458970332985}, {"type": "nauc_recall_at_20_std", "value": 1.564558976763232}, {"type": "nauc_recall_at_3_diff1", "value": 9.976406177297271}, {"type": "nauc_recall_at_3_max", "value": 16.56979232924191}, {"type": "nauc_recall_at_3_std", "value": -3.204552187951311}, {"type": "nauc_recall_at_5_diff1", "value": 10.283335368188732}, {"type": "nauc_recall_at_5_max", "value": 14.869143869085146}, {"type": "nauc_recall_at_5_std", "value": -1.3854541602405859}, {"type": "ndcg_at_1", "value": 6.468}, {"type": "ndcg_at_10", "value": 10.333}, {"type": "ndcg_at_100", "value": 14.437}, {"type": "ndcg_at_1000", "value": 17.7}, {"type": "ndcg_at_20", "value": 11.641}, {"type": "ndcg_at_3", "value": 8.222999999999999}, {"type": "ndcg_at_5", "value": 9.030000000000001}, {"type": "precision_at_1", "value": 6.468}, {"type": "precision_at_10", "value": 2.0650000000000004}, {"type": "precision_at_100", "value": 0.485}, {"type": "precision_at_1000", "value": 0.08800000000000001}, {"type": "precision_at_20", "value": 1.374}, {"type": "precision_at_3", "value": 4.063}, {"type": "precision_at_5", "value": 3.0349999999999997}, {"type": "recall_at_1", "value": 5.122}, {"type": 
"recall_at_10", "value": 15.494}, {"type": "recall_at_100", "value": 34.224}, {"type": "recall_at_1000", "value": 58.475}, {"type": "recall_at_20", "value": 20.281}, {"type": "recall_at_3", "value": 9.751999999999999}, {"type": "recall_at_5", "value": 11.654}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackPhysicsRetrieval (default)", "type": "mteb/cqadupstack-physics", "config": "default", "split": "test", "revision": "79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4"}, "metrics": [{"type": "main_score", "value": 22.541}, {"type": "map_at_1", "value": 13.925}, {"type": "map_at_10", "value": 18.919}, {"type": "map_at_100", "value": 19.986}, {"type": "map_at_1000", "value": 20.122999999999998}, {"type": "map_at_20", "value": 19.454}, {"type": "map_at_3", "value": 17.128}, {"type": "map_at_5", "value": 18.203}, {"type": "mrr_at_1", "value": 16.93936477382098}, {"type": "mrr_at_10", "value": 22.677177383625892}, {"type": "mrr_at_100", "value": 23.604708246998403}, {"type": "mrr_at_1000", "value": 23.68613779725607}, {"type": "mrr_at_20", "value": 23.153477073193283}, {"type": "mrr_at_3", "value": 20.7571382739814}, {"type": "mrr_at_5", "value": 21.916907282643578}, {"type": "nauc_map_at_1000_diff1", "value": 39.76772856309066}, {"type": "nauc_map_at_1000_max", "value": 22.353115497562158}, {"type": "nauc_map_at_1000_std", "value": 0.3117135171829511}, {"type": "nauc_map_at_100_diff1", "value": 39.78846396189273}, {"type": "nauc_map_at_100_max", "value": 22.363077131125365}, {"type": "nauc_map_at_100_std", "value": 0.2284514348299411}, {"type": "nauc_map_at_10_diff1", "value": 39.81391249750955}, {"type": "nauc_map_at_10_max", "value": 22.175966030251622}, {"type": "nauc_map_at_10_std", "value": -0.44362610335129193}, {"type": "nauc_map_at_1_diff1", "value": 49.32991220296194}, {"type": "nauc_map_at_1_max", "value": 24.83395680944923}, {"type": "nauc_map_at_1_std", "value": -0.7479527140782966}, {"type": "nauc_map_at_20_diff1", "value": 39.88873226053775}, {"type": "nauc_map_at_20_max", "value": 22.284944795016763}, {"type": "nauc_map_at_20_std", "value": -0.1297029523950583}, {"type": "nauc_map_at_3_diff1", "value": 40.94117243505588}, {"type": "nauc_map_at_3_max", "value": 23.178606683652237}, {"type": "nauc_map_at_3_std", "value": -0.9328230609603833}, {"type": "nauc_map_at_5_diff1", "value": 39.70960944345954}, {"type": "nauc_map_at_5_max", "value": 22.400765269020813}, {"type": "nauc_map_at_5_std", "value": -0.4493564812963111}, {"type": "nauc_mrr_at_1000_diff1", "value": 38.09736089241541}, {"type": "nauc_mrr_at_1000_max", "value": 24.95778301028415}, {"type": "nauc_mrr_at_1000_std", "value": 2.1983425445724563}, {"type": "nauc_mrr_at_100_diff1", "value": 38.07672381248107}, {"type": "nauc_mrr_at_100_max", "value": 24.974899996866757}, {"type": "nauc_mrr_at_100_std", "value": 2.1882636690518256}, {"type": "nauc_mrr_at_10_diff1", "value": 38.031417501129106}, {"type": "nauc_mrr_at_10_max", "value": 25.02204246091702}, {"type": "nauc_mrr_at_10_std", "value": 1.7073869104185317}, {"type": "nauc_mrr_at_1_diff1", "value": 48.15437534861672}, {"type": "nauc_mrr_at_1_max", "value": 28.63543344473674}, {"type": "nauc_mrr_at_1_std", "value": 2.970876262345635}, {"type": "nauc_mrr_at_20_diff1", "value": 38.128248653080966}, {"type": "nauc_mrr_at_20_max", "value": 24.952026253076998}, {"type": "nauc_mrr_at_20_std", "value": 2.006922052216995}, {"type": "nauc_mrr_at_3_diff1", "value": 40.075767014514504}, {"type": "nauc_mrr_at_3_max", "value": 26.543876767823356}, {"type": 
"nauc_mrr_at_3_std", "value": 1.4758229539915473}, {"type": "nauc_mrr_at_5_diff1", "value": 38.27626231450101}, {"type": "nauc_mrr_at_5_max", "value": 25.554184166817123}, {"type": "nauc_mrr_at_5_std", "value": 1.5289469743765285}, {"type": "nauc_ndcg_at_1000_diff1", "value": 35.81305711429328}, {"type": "nauc_ndcg_at_1000_max", "value": 21.462375611808884}, {"type": "nauc_ndcg_at_1000_std", "value": 4.37817577864403}, {"type": "nauc_ndcg_at_100_diff1", "value": 35.931470390569075}, {"type": "nauc_ndcg_at_100_max", "value": 21.320619926273025}, {"type": "nauc_ndcg_at_100_std", "value": 3.261613822378584}, {"type": "nauc_ndcg_at_10_diff1", "value": 36.309714091319485}, {"type": "nauc_ndcg_at_10_max", "value": 21.024554037914257}, {"type": "nauc_ndcg_at_10_std", "value": 0.34537778188330615}, {"type": "nauc_ndcg_at_1_diff1", "value": 48.15437534861672}, {"type": "nauc_ndcg_at_1_max", "value": 28.63543344473674}, {"type": "nauc_ndcg_at_1_std", "value": 2.970876262345635}, {"type": "nauc_ndcg_at_20_diff1", "value": 36.55637547214553}, {"type": "nauc_ndcg_at_20_max", "value": 21.054973880940498}, {"type": "nauc_ndcg_at_20_std", "value": 1.255923276642131}, {"type": "nauc_ndcg_at_3_diff1", "value": 38.83527890609877}, {"type": "nauc_ndcg_at_3_max", "value": 24.39276594538154}, {"type": "nauc_ndcg_at_3_std", "value": -0.11070216705281503}, {"type": "nauc_ndcg_at_5_diff1", "value": 36.320235850347025}, {"type": "nauc_ndcg_at_5_max", "value": 22.25222313573669}, {"type": "nauc_ndcg_at_5_std", "value": 0.24418344534659714}, {"type": "nauc_precision_at_1000_diff1", "value": 1.3553366783310352}, {"type": "nauc_precision_at_1000_max", "value": 12.71154662811487}, {"type": "nauc_precision_at_1000_std", "value": 14.501530463627166}, {"type": "nauc_precision_at_100_diff1", "value": 13.594445633079498}, {"type": "nauc_precision_at_100_max", "value": 22.831050695945486}, {"type": "nauc_precision_at_100_std", "value": 12.58168655119079}, {"type": "nauc_precision_at_10_diff1", "value": 24.370335349509663}, {"type": "nauc_precision_at_10_max", "value": 22.87333144912103}, {"type": "nauc_precision_at_10_std", "value": 2.9640170457571395}, {"type": "nauc_precision_at_1_diff1", "value": 48.15437534861672}, {"type": "nauc_precision_at_1_max", "value": 28.63543344473674}, {"type": "nauc_precision_at_1_std", "value": 2.970876262345635}, {"type": "nauc_precision_at_20_diff1", "value": 22.437172356428768}, {"type": "nauc_precision_at_20_max", "value": 22.84883486847393}, {"type": "nauc_precision_at_20_std", "value": 5.539373045213645}, {"type": "nauc_precision_at_3_diff1", "value": 32.80281631101501}, {"type": "nauc_precision_at_3_max", "value": 26.749107103708347}, {"type": "nauc_precision_at_3_std", "value": 2.083560285617921}, {"type": "nauc_precision_at_5_diff1", "value": 25.857893194609087}, {"type": "nauc_precision_at_5_max", "value": 24.006008172789514}, {"type": "nauc_precision_at_5_std", "value": 2.6470647298583816}, {"type": "nauc_recall_at_1000_diff1", "value": 21.271914690867405}, {"type": "nauc_recall_at_1000_max", "value": 10.8254772553339}, {"type": "nauc_recall_at_1000_std", "value": 24.222690055658997}, {"type": "nauc_recall_at_100_diff1", "value": 24.83018631818402}, {"type": "nauc_recall_at_100_max", "value": 12.260027028539406}, {"type": "nauc_recall_at_100_std", "value": 11.721583106210975}, {"type": "nauc_recall_at_10_diff1", "value": 28.25565512580088}, {"type": "nauc_recall_at_10_max", "value": 14.450763859357815}, {"type": "nauc_recall_at_10_std", "value": 0.7801836768161626}, {"type": 
"nauc_recall_at_1_diff1", "value": 49.32991220296194}, {"type": "nauc_recall_at_1_max", "value": 24.83395680944923}, {"type": "nauc_recall_at_1_std", "value": -0.7479527140782966}, {"type": "nauc_recall_at_20_diff1", "value": 28.871593968850156}, {"type": "nauc_recall_at_20_max", "value": 13.961700743219929}, {"type": "nauc_recall_at_20_std", "value": 3.5643293197299615}, {"type": "nauc_recall_at_3_diff1", "value": 32.57328129531904}, {"type": "nauc_recall_at_3_max", "value": 20.433413425310835}, {"type": "nauc_recall_at_3_std", "value": -1.247044503598521}, {"type": "nauc_recall_at_5_diff1", "value": 28.028510688953183}, {"type": "nauc_recall_at_5_max", "value": 16.784307010617596}, {"type": "nauc_recall_at_5_std", "value": -0.009997139996257565}, {"type": "ndcg_at_1", "value": 16.939}, {"type": "ndcg_at_10", "value": 22.541}, {"type": "ndcg_at_100", "value": 27.921000000000003}, {"type": "ndcg_at_1000", "value": 31.102}, {"type": "ndcg_at_20", "value": 24.285999999999998}, {"type": "ndcg_at_3", "value": 19.304}, {"type": "ndcg_at_5", "value": 20.996000000000002}, {"type": "precision_at_1", "value": 16.939}, {"type": "precision_at_10", "value": 4.186999999999999}, {"type": "precision_at_100", "value": 0.851}, {"type": "precision_at_1000", "value": 0.131}, {"type": "precision_at_20", "value": 2.656}, {"type": "precision_at_3", "value": 8.919}, {"type": "precision_at_5", "value": 6.641}, {"type": "recall_at_1", "value": 13.925}, {"type": "recall_at_10", "value": 29.826999999999998}, {"type": "recall_at_100", "value": 53.76800000000001}, {"type": "recall_at_1000", "value": 75.994}, {"type": "recall_at_20", "value": 35.947}, {"type": "recall_at_3", "value": 20.929000000000002}, {"type": "recall_at_5", "value": 25.202999999999996}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackProgrammersRetrieval (default)", "type": "mteb/cqadupstack-programmers", "config": "default", "split": "test", "revision": "6184bc1440d2dbc7612be22b50686b8826d22b32"}, "metrics": [{"type": "main_score", "value": 16.89}, {"type": "map_at_1", "value": 9.166}, {"type": "map_at_10", "value": 13.538}, {"type": "map_at_100", "value": 14.338999999999999}, {"type": "map_at_1000", "value": 14.471}, {"type": "map_at_20", "value": 13.916999999999998}, {"type": "map_at_3", "value": 11.748}, {"type": "map_at_5", "value": 12.751000000000001}, {"type": "mrr_at_1", "value": 11.643835616438356}, {"type": "mrr_at_10", "value": 16.520575125027168}, {"type": "mrr_at_100", "value": 17.297302503248996}, {"type": "mrr_at_1000", "value": 17.398178665590223}, {"type": "mrr_at_20", "value": 16.91999523594904}, {"type": "mrr_at_3", "value": 14.573820395738199}, {"type": "mrr_at_5", "value": 15.646879756468794}, {"type": "nauc_map_at_1000_diff1", "value": 36.42648210684073}, {"type": "nauc_map_at_1000_max", "value": 23.014439347329745}, {"type": "nauc_map_at_1000_std", "value": 1.7167917957352532}, {"type": "nauc_map_at_100_diff1", "value": 36.41668695392086}, {"type": "nauc_map_at_100_max", "value": 22.95286918473154}, {"type": "nauc_map_at_100_std", "value": 1.6607854131698931}, {"type": "nauc_map_at_10_diff1", "value": 36.853249061667704}, {"type": "nauc_map_at_10_max", "value": 23.30746444964867}, {"type": "nauc_map_at_10_std", "value": 0.8047283371322353}, {"type": "nauc_map_at_1_diff1", "value": 47.16421621003639}, {"type": "nauc_map_at_1_max", "value": 27.34193393838306}, {"type": "nauc_map_at_1_std", "value": 0.6408395204554622}, {"type": "nauc_map_at_20_diff1", "value": 36.56584303750146}, {"type": 
"nauc_map_at_20_max", "value": 23.115780372564476}, {"type": "nauc_map_at_20_std", "value": 1.249550410204099}, {"type": "nauc_map_at_3_diff1", "value": 40.53580184557388}, {"type": "nauc_map_at_3_max", "value": 23.635347744137672}, {"type": "nauc_map_at_3_std", "value": 0.33170039388290995}, {"type": "nauc_map_at_5_diff1", "value": 37.81956825949432}, {"type": "nauc_map_at_5_max", "value": 23.801068349520698}, {"type": "nauc_map_at_5_std", "value": -0.05159349623603464}, {"type": "nauc_mrr_at_1000_diff1", "value": 33.82170381349714}, {"type": "nauc_mrr_at_1000_max", "value": 24.509695389655278}, {"type": "nauc_mrr_at_1000_std", "value": 0.38761162146831024}, {"type": "nauc_mrr_at_100_diff1", "value": 33.78083256685757}, {"type": "nauc_mrr_at_100_max", "value": 24.46949787827838}, {"type": "nauc_mrr_at_100_std", "value": 0.3727304295879898}, {"type": "nauc_mrr_at_10_diff1", "value": 34.04995222179279}, {"type": "nauc_mrr_at_10_max", "value": 24.844254940118603}, {"type": "nauc_mrr_at_10_std", "value": -0.09989395943351509}, {"type": "nauc_mrr_at_1_diff1", "value": 42.60409022051744}, {"type": "nauc_mrr_at_1_max", "value": 28.557152433476706}, {"type": "nauc_mrr_at_1_std", "value": -0.022054720915518654}, {"type": "nauc_mrr_at_20_diff1", "value": 33.87215561918837}, {"type": "nauc_mrr_at_20_max", "value": 24.678806836379767}, {"type": "nauc_mrr_at_20_std", "value": 0.07011412656469218}, {"type": "nauc_mrr_at_3_diff1", "value": 37.553351431355416}, {"type": "nauc_mrr_at_3_max", "value": 24.96142716696304}, {"type": "nauc_mrr_at_3_std", "value": 0.20818976575893774}, {"type": "nauc_mrr_at_5_diff1", "value": 34.990863336264105}, {"type": "nauc_mrr_at_5_max", "value": 25.149251424623092}, {"type": "nauc_mrr_at_5_std", "value": -0.36385730855435344}, {"type": "nauc_ndcg_at_1000_diff1", "value": 31.521772887139164}, {"type": "nauc_ndcg_at_1000_max", "value": 21.820611295854476}, {"type": "nauc_ndcg_at_1000_std", "value": 5.744438883711709}, {"type": "nauc_ndcg_at_100_diff1", "value": 30.860742071525365}, {"type": "nauc_ndcg_at_100_max", "value": 20.333360034062228}, {"type": "nauc_ndcg_at_100_std", "value": 4.817571323412305}, {"type": "nauc_ndcg_at_10_diff1", "value": 32.02591793840569}, {"type": "nauc_ndcg_at_10_max", "value": 22.327582801844766}, {"type": "nauc_ndcg_at_10_std", "value": 1.308815569375002}, {"type": "nauc_ndcg_at_1_diff1", "value": 42.60409022051744}, {"type": "nauc_ndcg_at_1_max", "value": 28.557152433476706}, {"type": "nauc_ndcg_at_1_std", "value": -0.022054720915518654}, {"type": "nauc_ndcg_at_20_diff1", "value": 31.183844509937447}, {"type": "nauc_ndcg_at_20_max", "value": 21.710204283748464}, {"type": "nauc_ndcg_at_20_std", "value": 2.3543373338618716}, {"type": "nauc_ndcg_at_3_diff1", "value": 37.757093644477195}, {"type": "nauc_ndcg_at_3_max", "value": 23.3515751628835}, {"type": "nauc_ndcg_at_3_std", "value": 0.5117507109615564}, {"type": "nauc_ndcg_at_5_diff1", "value": 33.80970150542254}, {"type": "nauc_ndcg_at_5_max", "value": 23.377489792676403}, {"type": "nauc_ndcg_at_5_std", "value": -0.2893341840565308}, {"type": "nauc_precision_at_1000_diff1", "value": 3.707208967665837}, {"type": "nauc_precision_at_1000_max", "value": 12.034292018846514}, {"type": "nauc_precision_at_1000_std", "value": 6.802731430305505}, {"type": "nauc_precision_at_100_diff1", "value": 12.426875443830042}, {"type": "nauc_precision_at_100_max", "value": 12.988732249870225}, {"type": "nauc_precision_at_100_std", "value": 11.037489289119383}, {"type": "nauc_precision_at_10_diff1", "value": 
19.964451016510218}, {"type": "nauc_precision_at_10_max", "value": 21.483257270810522}, {"type": "nauc_precision_at_10_std", "value": 2.2065598381345053}, {"type": "nauc_precision_at_1_diff1", "value": 42.60409022051744}, {"type": "nauc_precision_at_1_max", "value": 28.557152433476706}, {"type": "nauc_precision_at_1_std", "value": -0.022054720915518654}, {"type": "nauc_precision_at_20_diff1", "value": 17.519760734491374}, {"type": "nauc_precision_at_20_max", "value": 19.42156895187867}, {"type": "nauc_precision_at_20_std", "value": 5.58566386311753}, {"type": "nauc_precision_at_3_diff1", "value": 30.863362948010643}, {"type": "nauc_precision_at_3_max", "value": 21.97149191045173}, {"type": "nauc_precision_at_3_std", "value": -0.10795969935082905}, {"type": "nauc_precision_at_5_diff1", "value": 24.57403889839064}, {"type": "nauc_precision_at_5_max", "value": 23.330523157159384}, {"type": "nauc_precision_at_5_std", "value": -0.5736565687187795}, {"type": "nauc_recall_at_1000_diff1", "value": 21.845537827759255}, {"type": "nauc_recall_at_1000_max", "value": 16.85933147171258}, {"type": "nauc_recall_at_1000_std", "value": 22.408020236230566}, {"type": "nauc_recall_at_100_diff1", "value": 19.987143599818943}, {"type": "nauc_recall_at_100_max", "value": 10.475075018778545}, {"type": "nauc_recall_at_100_std", "value": 13.795219707527833}, {"type": "nauc_recall_at_10_diff1", "value": 22.012495555108874}, {"type": "nauc_recall_at_10_max", "value": 17.742806672295814}, {"type": "nauc_recall_at_10_std", "value": 3.3663340109082194}, {"type": "nauc_recall_at_1_diff1", "value": 47.16421621003639}, {"type": "nauc_recall_at_1_max", "value": 27.34193393838306}, {"type": "nauc_recall_at_1_std", "value": 0.6408395204554622}, {"type": "nauc_recall_at_20_diff1", "value": 20.24245341403342}, {"type": "nauc_recall_at_20_max", "value": 16.292684691149837}, {"type": "nauc_recall_at_20_std", "value": 5.732480922479413}, {"type": "nauc_recall_at_3_diff1", "value": 34.061353914493004}, {"type": "nauc_recall_at_3_max", "value": 19.701505268864018}, {"type": "nauc_recall_at_3_std", "value": 0.15707036102604408}, {"type": "nauc_recall_at_5_diff1", "value": 25.41386728745299}, {"type": "nauc_recall_at_5_max", "value": 19.7756818671563}, {"type": "nauc_recall_at_5_std", "value": -1.0264446116247112}, {"type": "ndcg_at_1", "value": 11.644}, {"type": "ndcg_at_10", "value": 16.89}, {"type": "ndcg_at_100", "value": 21.104}, {"type": "ndcg_at_1000", "value": 24.669}, {"type": "ndcg_at_20", "value": 18.195}, {"type": "ndcg_at_3", "value": 13.350999999999999}, {"type": "ndcg_at_5", "value": 15.02}, {"type": "precision_at_1", "value": 11.644}, {"type": "precision_at_10", "value": 3.276}, {"type": "precision_at_100", "value": 0.652}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_20", "value": 2.043}, {"type": "precision_at_3", "value": 6.3549999999999995}, {"type": "precision_at_5", "value": 4.8629999999999995}, {"type": "recall_at_1", "value": 9.166}, {"type": "recall_at_10", "value": 24.38}, {"type": "recall_at_100", "value": 43.174}, {"type": "recall_at_1000", "value": 69.063}, {"type": "recall_at_20", "value": 28.89}, {"type": "recall_at_3", "value": 14.674999999999999}, {"type": "recall_at_5", "value": 18.864}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackRetrieval (default)", "type": "CQADupstackRetrieval_is_a_combined_dataset", "config": "default", "split": "test", "revision": "CQADupstackRetrieval_is_a_combined_dataset"}, "metrics": [{"type": "main_score", 
"value": 19.451833333333333}, {"type": "ndcg_at_10", "value": 19.451833333333333}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackStatsRetrieval (default)", "type": "mteb/cqadupstack-stats", "config": "default", "split": "test", "revision": "65ac3a16b8e91f9cee4c9828cc7c335575432a2a"}, "metrics": [{"type": "main_score", "value": 15.190000000000001}, {"type": "map_at_1", "value": 8.588}, {"type": "map_at_10", "value": 12.491}, {"type": "map_at_100", "value": 13.181000000000001}, {"type": "map_at_1000", "value": 13.272}, {"type": "map_at_20", "value": 12.803}, {"type": "map_at_3", "value": 11.171000000000001}, {"type": "map_at_5", "value": 11.792}, {"type": "mrr_at_1", "value": 10.122699386503067}, {"type": "mrr_at_10", "value": 14.334769695199148}, {"type": "mrr_at_100", "value": 15.038531985477402}, {"type": "mrr_at_1000", "value": 15.118584906152948}, {"type": "mrr_at_20", "value": 14.643456341375582}, {"type": "mrr_at_3", "value": 13.011247443762786}, {"type": "mrr_at_5", "value": 13.586400817995917}, {"type": "nauc_map_at_1000_diff1", "value": 32.49525214361852}, {"type": "nauc_map_at_1000_max", "value": 25.00989242287795}, {"type": "nauc_map_at_1000_std", "value": -6.0481296083442215}, {"type": "nauc_map_at_100_diff1", "value": 32.58412301567017}, {"type": "nauc_map_at_100_max", "value": 25.00710798346013}, {"type": "nauc_map_at_100_std", "value": -6.027212357257859}, {"type": "nauc_map_at_10_diff1", "value": 32.59408959509193}, {"type": "nauc_map_at_10_max", "value": 25.590812515768057}, {"type": "nauc_map_at_10_std", "value": -6.723358516793515}, {"type": "nauc_map_at_1_diff1", "value": 39.31467044788035}, {"type": "nauc_map_at_1_max", "value": 30.076159948793276}, {"type": "nauc_map_at_1_std", "value": -7.409917402741314}, {"type": "nauc_map_at_20_diff1", "value": 32.59390259000842}, {"type": "nauc_map_at_20_max", "value": 25.24747833386027}, {"type": "nauc_map_at_20_std", "value": -6.327479010788288}, {"type": "nauc_map_at_3_diff1", "value": 34.27305943120105}, {"type": "nauc_map_at_3_max", "value": 27.325746934815616}, {"type": "nauc_map_at_3_std", "value": -7.588768866133594}, {"type": "nauc_map_at_5_diff1", "value": 33.084018261535256}, {"type": "nauc_map_at_5_max", "value": 26.240785153709425}, {"type": "nauc_map_at_5_std", "value": -7.145825000341606}, {"type": "nauc_mrr_at_1000_diff1", "value": 32.13146292629234}, {"type": "nauc_mrr_at_1000_max", "value": 27.012685186249}, {"type": "nauc_mrr_at_1000_std", "value": -3.576499416328648}, {"type": "nauc_mrr_at_100_diff1", "value": 32.1598198156621}, {"type": "nauc_mrr_at_100_max", "value": 26.99007757074476}, {"type": "nauc_mrr_at_100_std", "value": -3.5328041627513387}, {"type": "nauc_mrr_at_10_diff1", "value": 32.2769559954424}, {"type": "nauc_mrr_at_10_max", "value": 27.671797146230915}, {"type": "nauc_mrr_at_10_std", "value": -4.014326165260914}, {"type": "nauc_mrr_at_1_diff1", "value": 39.49445020079931}, {"type": "nauc_mrr_at_1_max", "value": 32.47498778564666}, {"type": "nauc_mrr_at_1_std", "value": -3.9005316134362285}, {"type": "nauc_mrr_at_20_diff1", "value": 32.1506954430531}, {"type": "nauc_mrr_at_20_max", "value": 27.21472311716892}, {"type": "nauc_mrr_at_20_std", "value": -3.8339274287542295}, {"type": "nauc_mrr_at_3_diff1", "value": 34.213957754732874}, {"type": "nauc_mrr_at_3_max", "value": 29.81396274867843}, {"type": "nauc_mrr_at_3_std", "value": -4.242564017046673}, {"type": "nauc_mrr_at_5_diff1", "value": 32.79023586229421}, {"type": "nauc_mrr_at_5_max", "value": 28.563242912189224}, 
{"type": "nauc_mrr_at_5_std", "value": -4.347078530440767}, {"type": "nauc_ndcg_at_1000_diff1", "value": 28.030132389809143}, {"type": "nauc_ndcg_at_1000_max", "value": 20.521142889145125}, {"type": "nauc_ndcg_at_1000_std", "value": -3.4641513799298465}, {"type": "nauc_ndcg_at_100_diff1", "value": 29.790867206467205}, {"type": "nauc_ndcg_at_100_max", "value": 20.777998695211025}, {"type": "nauc_ndcg_at_100_std", "value": -3.082355174684713}, {"type": "nauc_ndcg_at_10_diff1", "value": 29.99477135479973}, {"type": "nauc_ndcg_at_10_max", "value": 23.59847010475954}, {"type": "nauc_ndcg_at_10_std", "value": -5.388778425113355}, {"type": "nauc_ndcg_at_1_diff1", "value": 39.49445020079931}, {"type": "nauc_ndcg_at_1_max", "value": 32.47498778564666}, {"type": "nauc_ndcg_at_1_std", "value": -3.9005316134362285}, {"type": "nauc_ndcg_at_20_diff1", "value": 29.832962796031044}, {"type": "nauc_ndcg_at_20_max", "value": 22.19789441941385}, {"type": "nauc_ndcg_at_20_std", "value": -4.678750624503098}, {"type": "nauc_ndcg_at_3_diff1", "value": 33.28264932851035}, {"type": "nauc_ndcg_at_3_max", "value": 27.237791722895505}, {"type": "nauc_ndcg_at_3_std", "value": -6.42213360173857}, {"type": "nauc_ndcg_at_5_diff1", "value": 31.131290570314228}, {"type": "nauc_ndcg_at_5_max", "value": 25.12722717817001}, {"type": "nauc_ndcg_at_5_std", "value": -6.150569476219248}, {"type": "nauc_precision_at_1000_diff1", "value": 9.392568676712683}, {"type": "nauc_precision_at_1000_max", "value": 11.20864013974632}, {"type": "nauc_precision_at_1000_std", "value": 5.320810472292775}, {"type": "nauc_precision_at_100_diff1", "value": 23.329271108392348}, {"type": "nauc_precision_at_100_max", "value": 15.096990134028458}, {"type": "nauc_precision_at_100_std", "value": 6.463877644271909}, {"type": "nauc_precision_at_10_diff1", "value": 26.07195079393671}, {"type": "nauc_precision_at_10_max", "value": 23.315213833722375}, {"type": "nauc_precision_at_10_std", "value": -0.7973933486646361}, {"type": "nauc_precision_at_1_diff1", "value": 39.49445020079931}, {"type": "nauc_precision_at_1_max", "value": 32.47498778564666}, {"type": "nauc_precision_at_1_std", "value": -3.9005316134362285}, {"type": "nauc_precision_at_20_diff1", "value": 26.006356559701437}, {"type": "nauc_precision_at_20_max", "value": 20.64452647574728}, {"type": "nauc_precision_at_20_std", "value": 1.186976191997027}, {"type": "nauc_precision_at_3_diff1", "value": 31.349575990830747}, {"type": "nauc_precision_at_3_max", "value": 27.619655967592983}, {"type": "nauc_precision_at_3_std", "value": -3.5875703843406144}, {"type": "nauc_precision_at_5_diff1", "value": 28.056629721139153}, {"type": "nauc_precision_at_5_max", "value": 24.93477215782415}, {"type": "nauc_precision_at_5_std", "value": -2.07688747626092}, {"type": "nauc_recall_at_1000_diff1", "value": 11.939738127565153}, {"type": "nauc_recall_at_1000_max", "value": 3.1013420342149427}, {"type": "nauc_recall_at_1000_std", "value": 0.42106295882988565}, {"type": "nauc_recall_at_100_diff1", "value": 23.1148888679206}, {"type": "nauc_recall_at_100_max", "value": 7.879492884697378}, {"type": "nauc_recall_at_100_std", "value": 1.9008293630458633}, {"type": "nauc_recall_at_10_diff1", "value": 23.290862746428513}, {"type": "nauc_recall_at_10_max", "value": 16.127629443707487}, {"type": "nauc_recall_at_10_std", "value": -4.448472009523851}, {"type": "nauc_recall_at_1_diff1", "value": 39.31467044788035}, {"type": "nauc_recall_at_1_max", "value": 30.076159948793276}, {"type": "nauc_recall_at_1_std", "value": 
-7.409917402741314}, {"type": "nauc_recall_at_20_diff1", "value": 23.189927344334322}, {"type": "nauc_recall_at_20_max", "value": 12.404091273454796}, {"type": "nauc_recall_at_20_std", "value": -3.1379735901683317}, {"type": "nauc_recall_at_3_diff1", "value": 29.35343707457242}, {"type": "nauc_recall_at_3_max", "value": 23.518636184215154}, {"type": "nauc_recall_at_3_std", "value": -6.676520147409216}, {"type": "nauc_recall_at_5_diff1", "value": 25.982556962678487}, {"type": "nauc_recall_at_5_max", "value": 19.86486077269299}, {"type": "nauc_recall_at_5_std", "value": -6.003801784768082}, {"type": "ndcg_at_1", "value": 10.123}, {"type": "ndcg_at_10", "value": 15.190000000000001}, {"type": "ndcg_at_100", "value": 19.052}, {"type": "ndcg_at_1000", "value": 21.769}, {"type": "ndcg_at_20", "value": 16.298000000000002}, {"type": "ndcg_at_3", "value": 12.589}, {"type": "ndcg_at_5", "value": 13.535}, {"type": "precision_at_1", "value": 10.123}, {"type": "precision_at_10", "value": 2.6839999999999997}, {"type": "precision_at_100", "value": 0.503}, {"type": "precision_at_1000", "value": 0.08}, {"type": "precision_at_20", "value": 1.603}, {"type": "precision_at_3", "value": 5.726}, {"type": "precision_at_5", "value": 4.109999999999999}, {"type": "recall_at_1", "value": 8.588}, {"type": "recall_at_10", "value": 21.834}, {"type": "recall_at_100", "value": 40.309}, {"type": "recall_at_1000", "value": 61.208}, {"type": "recall_at_20", "value": 26.070999999999998}, {"type": "recall_at_3", "value": 14.399000000000001}, {"type": "recall_at_5", "value": 16.875999999999998}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackTexRetrieval (default)", "type": "mteb/cqadupstack-tex", "config": "default", "split": "test", "revision": "46989137a86843e03a6195de44b09deda022eec7"}, "metrics": [{"type": "main_score", "value": 11.503}, {"type": "map_at_1", "value": 6.542000000000001}, {"type": "map_at_10", "value": 9.411999999999999}, {"type": "map_at_100", "value": 10.030999999999999}, {"type": "map_at_1000", "value": 10.14}, {"type": "map_at_20", "value": 9.724}, {"type": "map_at_3", "value": 8.509}, {"type": "map_at_5", "value": 8.965}, {"type": "mrr_at_1", "value": 8.121128699242945}, {"type": "mrr_at_10", "value": 11.487303225947409}, {"type": "mrr_at_100", "value": 12.144070985687668}, {"type": "mrr_at_1000", "value": 12.23492200312306}, {"type": "mrr_at_20", "value": 11.824789289064652}, {"type": "mrr_at_3", "value": 10.438173893094747}, {"type": "mrr_at_5", "value": 10.945744436797428}, {"type": "nauc_map_at_1000_diff1", "value": 32.70276581980958}, {"type": "nauc_map_at_1000_max", "value": 16.03417959943129}, {"type": "nauc_map_at_1000_std", "value": -5.72561310082251}, {"type": "nauc_map_at_100_diff1", "value": 32.74170233438755}, {"type": "nauc_map_at_100_max", "value": 16.007188000450924}, {"type": "nauc_map_at_100_std", "value": -5.866527320820588}, {"type": "nauc_map_at_10_diff1", "value": 33.65756116022195}, {"type": "nauc_map_at_10_max", "value": 16.329704041974207}, {"type": "nauc_map_at_10_std", "value": -6.532157318286642}, {"type": "nauc_map_at_1_diff1", "value": 42.13696871713339}, {"type": "nauc_map_at_1_max", "value": 17.632090262590623}, {"type": "nauc_map_at_1_std", "value": -7.011301507001842}, {"type": "nauc_map_at_20_diff1", "value": 32.96793409764783}, {"type": "nauc_map_at_20_max", "value": 16.11279519186098}, {"type": "nauc_map_at_20_std", "value": -6.316702747144485}, {"type": "nauc_map_at_3_diff1", "value": 35.85582815528229}, {"type": "nauc_map_at_3_max", "value": 
17.119718606824765}, {"type": "nauc_map_at_3_std", "value": -6.75128616063151}, {"type": "nauc_map_at_5_diff1", "value": 34.703608964177015}, {"type": "nauc_map_at_5_max", "value": 16.774418221756946}, {"type": "nauc_map_at_5_std", "value": -6.7924413895275135}, {"type": "nauc_mrr_at_1000_diff1", "value": 33.25123047452874}, {"type": "nauc_mrr_at_1000_max", "value": 17.664781297091984}, {"type": "nauc_mrr_at_1000_std", "value": -4.883960114347252}, {"type": "nauc_mrr_at_100_diff1", "value": 33.26376684107494}, {"type": "nauc_mrr_at_100_max", "value": 17.660366713140917}, {"type": "nauc_mrr_at_100_std", "value": -4.936094906621694}, {"type": "nauc_mrr_at_10_diff1", "value": 34.14453970601731}, {"type": "nauc_mrr_at_10_max", "value": 18.078450957158427}, {"type": "nauc_mrr_at_10_std", "value": -5.56029931021929}, {"type": "nauc_mrr_at_1_diff1", "value": 42.624124463773974}, {"type": "nauc_mrr_at_1_max", "value": 19.644592703779377}, {"type": "nauc_mrr_at_1_std", "value": -6.847467406875957}, {"type": "nauc_mrr_at_20_diff1", "value": 33.48658556695367}, {"type": "nauc_mrr_at_20_max", "value": 17.854173270865513}, {"type": "nauc_mrr_at_20_std", "value": -5.307384000928626}, {"type": "nauc_mrr_at_3_diff1", "value": 36.42777944064556}, {"type": "nauc_mrr_at_3_max", "value": 18.818021509412347}, {"type": "nauc_mrr_at_3_std", "value": -5.971767723227725}, {"type": "nauc_mrr_at_5_diff1", "value": 35.26890794067812}, {"type": "nauc_mrr_at_5_max", "value": 18.536432127845615}, {"type": "nauc_mrr_at_5_std", "value": -5.955315816111514}, {"type": "nauc_ndcg_at_1000_diff1", "value": 26.787545842668386}, {"type": "nauc_ndcg_at_1000_max", "value": 14.668417213125176}, {"type": "nauc_ndcg_at_1000_std", "value": 0.11283761427226682}, {"type": "nauc_ndcg_at_100_diff1", "value": 27.296346462130778}, {"type": "nauc_ndcg_at_100_max", "value": 14.628630017107083}, {"type": "nauc_ndcg_at_100_std", "value": -2.5838126321301287}, {"type": "nauc_ndcg_at_10_diff1", "value": 30.729975615630583}, {"type": "nauc_ndcg_at_10_max", "value": 15.984165870709463}, {"type": "nauc_ndcg_at_10_std", "value": -5.795796151010406}, {"type": "nauc_ndcg_at_1_diff1", "value": 42.624124463773974}, {"type": "nauc_ndcg_at_1_max", "value": 19.644592703779377}, {"type": "nauc_ndcg_at_1_std", "value": -6.847467406875957}, {"type": "nauc_ndcg_at_20_diff1", "value": 28.62024015680217}, {"type": "nauc_ndcg_at_20_max", "value": 15.22451859400659}, {"type": "nauc_ndcg_at_20_std", "value": -5.156813837280861}, {"type": "nauc_ndcg_at_3_diff1", "value": 34.82831844406019}, {"type": "nauc_ndcg_at_3_max", "value": 17.789223218636945}, {"type": "nauc_ndcg_at_3_std", "value": -6.383595531284539}, {"type": "nauc_ndcg_at_5_diff1", "value": 32.85603864688551}, {"type": "nauc_ndcg_at_5_max", "value": 17.05358609428122}, {"type": "nauc_ndcg_at_5_std", "value": -6.376667913153048}, {"type": "nauc_precision_at_1000_diff1", "value": 11.468656684649677}, {"type": "nauc_precision_at_1000_max", "value": 15.320322507806294}, {"type": "nauc_precision_at_1000_std", "value": 16.669904386742214}, {"type": "nauc_precision_at_100_diff1", "value": 17.31311828660998}, {"type": "nauc_precision_at_100_max", "value": 17.18604042044477}, {"type": "nauc_precision_at_100_std", "value": 6.921989479762083}, {"type": "nauc_precision_at_10_diff1", "value": 24.341600277154242}, {"type": "nauc_precision_at_10_max", "value": 18.290595240997305}, {"type": "nauc_precision_at_10_std", "value": -3.249248531480952}, {"type": "nauc_precision_at_1_diff1", "value": 42.624124463773974}, 
{"type": "nauc_precision_at_1_max", "value": 19.644592703779377}, {"type": "nauc_precision_at_1_std", "value": -6.847467406875957}, {"type": "nauc_precision_at_20_diff1", "value": 19.67933630715089}, {"type": "nauc_precision_at_20_max", "value": 17.708788971071886}, {"type": "nauc_precision_at_20_std", "value": -1.698058343596388}, {"type": "nauc_precision_at_3_diff1", "value": 32.56407923967103}, {"type": "nauc_precision_at_3_max", "value": 20.008945086974204}, {"type": "nauc_precision_at_3_std", "value": -5.700587196952845}, {"type": "nauc_precision_at_5_diff1", "value": 28.910777719175375}, {"type": "nauc_precision_at_5_max", "value": 19.181013952415274}, {"type": "nauc_precision_at_5_std", "value": -5.09856965471284}, {"type": "nauc_recall_at_1000_diff1", "value": 12.396394270885589}, {"type": "nauc_recall_at_1000_max", "value": 8.239418701743709}, {"type": "nauc_recall_at_1000_std", "value": 15.546192718064672}, {"type": "nauc_recall_at_100_diff1", "value": 15.657113708258077}, {"type": "nauc_recall_at_100_max", "value": 9.7558897450188}, {"type": "nauc_recall_at_100_std", "value": 3.7828006481678327}, {"type": "nauc_recall_at_10_diff1", "value": 23.540703764594824}, {"type": "nauc_recall_at_10_max", "value": 12.514108862838025}, {"type": "nauc_recall_at_10_std", "value": -4.890712777213581}, {"type": "nauc_recall_at_1_diff1", "value": 42.13696871713339}, {"type": "nauc_recall_at_1_max", "value": 17.632090262590623}, {"type": "nauc_recall_at_1_std", "value": -7.011301507001842}, {"type": "nauc_recall_at_20_diff1", "value": 18.632795869246763}, {"type": "nauc_recall_at_20_max", "value": 10.781667052463174}, {"type": "nauc_recall_at_20_std", "value": -3.3062758301873467}, {"type": "nauc_recall_at_3_diff1", "value": 29.84753634947647}, {"type": "nauc_recall_at_3_max", "value": 15.743144468924344}, {"type": "nauc_recall_at_3_std", "value": -6.214675269831871}, {"type": "nauc_recall_at_5_diff1", "value": 26.80447414490652}, {"type": "nauc_recall_at_5_max", "value": 14.403515700429177}, {"type": "nauc_recall_at_5_std", "value": -6.259205870944759}, {"type": "ndcg_at_1", "value": 8.121}, {"type": "ndcg_at_10", "value": 11.503}, {"type": "ndcg_at_100", "value": 14.951}, {"type": "ndcg_at_1000", "value": 18.196}, {"type": "ndcg_at_20", "value": 12.614}, {"type": "ndcg_at_3", "value": 9.743}, {"type": "ndcg_at_5", "value": 10.435}, {"type": "precision_at_1", "value": 8.121}, {"type": "precision_at_10", "value": 2.168}, {"type": "precision_at_100", "value": 0.468}, {"type": "precision_at_1000", "value": 0.089}, {"type": "precision_at_20", "value": 1.383}, {"type": "precision_at_3", "value": 4.6690000000000005}, {"type": "precision_at_5", "value": 3.345}, {"type": "recall_at_1", "value": 6.542000000000001}, {"type": "recall_at_10", "value": 15.794}, {"type": "recall_at_100", "value": 32.031}, {"type": "recall_at_1000", "value": 56.263}, {"type": "recall_at_20", "value": 20.023}, {"type": "recall_at_3", "value": 10.791}, {"type": "recall_at_5", "value": 12.61}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackUnixRetrieval (default)", "type": "mteb/cqadupstack-unix", "config": "default", "split": "test", "revision": "6c6430d3a6d36f8d2a829195bc5dc94d7e063e53"}, "metrics": [{"type": "main_score", "value": 18.752}, {"type": "map_at_1", "value": 12.076}, {"type": "map_at_10", "value": 15.886}, {"type": "map_at_100", "value": 16.525000000000002}, {"type": "map_at_1000", "value": 16.628}, {"type": "map_at_20", "value": 16.150000000000002}, {"type": "map_at_3", "value": 14.637}, 
{"type": "map_at_5", "value": 15.265999999999998}, {"type": "mrr_at_1", "value": 14.458955223880595}, {"type": "mrr_at_10", "value": 18.78960850509357}, {"type": "mrr_at_100", "value": 19.457515825168713}, {"type": "mrr_at_1000", "value": 19.544411963686347}, {"type": "mrr_at_20", "value": 19.069352610955498}, {"type": "mrr_at_3", "value": 17.50621890547264}, {"type": "mrr_at_5", "value": 18.135883084577117}, {"type": "nauc_map_at_1000_diff1", "value": 35.26453666091026}, {"type": "nauc_map_at_1000_max", "value": 28.45949873807009}, {"type": "nauc_map_at_1000_std", "value": -3.4139786458650603}, {"type": "nauc_map_at_100_diff1", "value": 35.26758793761312}, {"type": "nauc_map_at_100_max", "value": 28.427395341056673}, {"type": "nauc_map_at_100_std", "value": -3.494357914459209}, {"type": "nauc_map_at_10_diff1", "value": 35.748030827297846}, {"type": "nauc_map_at_10_max", "value": 28.709693519088635}, {"type": "nauc_map_at_10_std", "value": -4.0888030664931545}, {"type": "nauc_map_at_1_diff1", "value": 41.858308280129286}, {"type": "nauc_map_at_1_max", "value": 29.59713822513886}, {"type": "nauc_map_at_1_std", "value": -5.112958479444919}, {"type": "nauc_map_at_20_diff1", "value": 35.53257258132197}, {"type": "nauc_map_at_20_max", "value": 28.65465491465789}, {"type": "nauc_map_at_20_std", "value": -3.844442722241712}, {"type": "nauc_map_at_3_diff1", "value": 36.65786183200192}, {"type": "nauc_map_at_3_max", "value": 28.80283494555713}, {"type": "nauc_map_at_3_std", "value": -3.956759027099864}, {"type": "nauc_map_at_5_diff1", "value": 36.45785727569078}, {"type": "nauc_map_at_5_max", "value": 28.987265101067706}, {"type": "nauc_map_at_5_std", "value": -3.8836573002904364}, {"type": "nauc_mrr_at_1000_diff1", "value": 33.15170628844491}, {"type": "nauc_mrr_at_1000_max", "value": 29.80316660586958}, {"type": "nauc_mrr_at_1000_std", "value": -2.919368628674066}, {"type": "nauc_mrr_at_100_diff1", "value": 33.149497124475005}, {"type": "nauc_mrr_at_100_max", "value": 29.791578160522104}, {"type": "nauc_mrr_at_100_std", "value": -2.9631398714502812}, {"type": "nauc_mrr_at_10_diff1", "value": 33.55199061618286}, {"type": "nauc_mrr_at_10_max", "value": 30.069009995703794}, {"type": "nauc_mrr_at_10_std", "value": -3.6083857944611797}, {"type": "nauc_mrr_at_1_diff1", "value": 40.186482910894526}, {"type": "nauc_mrr_at_1_max", "value": 32.037574024173274}, {"type": "nauc_mrr_at_1_std", "value": -3.9185583280706497}, {"type": "nauc_mrr_at_20_diff1", "value": 33.29736140197984}, {"type": "nauc_mrr_at_20_max", "value": 29.987219611017764}, {"type": "nauc_mrr_at_20_std", "value": -3.2911243316613477}, {"type": "nauc_mrr_at_3_diff1", "value": 34.59766570016104}, {"type": "nauc_mrr_at_3_max", "value": 30.548093957699834}, {"type": "nauc_mrr_at_3_std", "value": -3.548724979573667}, {"type": "nauc_mrr_at_5_diff1", "value": 34.18658889496389}, {"type": "nauc_mrr_at_5_max", "value": 30.41947286010115}, {"type": "nauc_mrr_at_5_std", "value": -3.43375074675157}, {"type": "nauc_ndcg_at_1000_diff1", "value": 30.49383193075413}, {"type": "nauc_ndcg_at_1000_max", "value": 26.437945296729847}, {"type": "nauc_ndcg_at_1000_std", "value": 0.713575479477255}, {"type": "nauc_ndcg_at_100_diff1", "value": 30.39984801831684}, {"type": "nauc_ndcg_at_100_max", "value": 26.05310862803912}, {"type": "nauc_ndcg_at_100_std", "value": -0.9969079892996344}, {"type": "nauc_ndcg_at_10_diff1", "value": 32.67867574566094}, {"type": "nauc_ndcg_at_10_max", "value": 28.071536866518898}, {"type": "nauc_ndcg_at_10_std", "value": 
-4.0839672791072035}, {"type": "nauc_ndcg_at_1_diff1", "value": 40.186482910894526}, {"type": "nauc_ndcg_at_1_max", "value": 32.037574024173274}, {"type": "nauc_ndcg_at_1_std", "value": -3.9185583280706497}, {"type": "nauc_ndcg_at_20_diff1", "value": 31.87681672318583}, {"type": "nauc_ndcg_at_20_max", "value": 27.757429962292935}, {"type": "nauc_ndcg_at_20_std", "value": -3.289181709637281}, {"type": "nauc_ndcg_at_3_diff1", "value": 34.496401264219436}, {"type": "nauc_ndcg_at_3_max", "value": 29.14164273814545}, {"type": "nauc_ndcg_at_3_std", "value": -3.6284439880158454}, {"type": "nauc_ndcg_at_5_diff1", "value": 34.246766411944606}, {"type": "nauc_ndcg_at_5_max", "value": 28.94897772325865}, {"type": "nauc_ndcg_at_5_std", "value": -3.55118261356311}, {"type": "nauc_precision_at_1000_diff1", "value": 5.378065708185438}, {"type": "nauc_precision_at_1000_max", "value": 13.48764762389057}, {"type": "nauc_precision_at_1000_std", "value": 18.691426967517767}, {"type": "nauc_precision_at_100_diff1", "value": 13.43482265345938}, {"type": "nauc_precision_at_100_max", "value": 18.365831924084738}, {"type": "nauc_precision_at_100_std", "value": 9.235798636518911}, {"type": "nauc_precision_at_10_diff1", "value": 22.83462539079133}, {"type": "nauc_precision_at_10_max", "value": 28.88737216224709}, {"type": "nauc_precision_at_10_std", "value": -3.6618498163720496}, {"type": "nauc_precision_at_1_diff1", "value": 40.186482910894526}, {"type": "nauc_precision_at_1_max", "value": 32.037574024173274}, {"type": "nauc_precision_at_1_std", "value": -3.9185583280706497}, {"type": "nauc_precision_at_20_diff1", "value": 20.85661718188355}, {"type": "nauc_precision_at_20_max", "value": 27.64527011746391}, {"type": "nauc_precision_at_20_std", "value": -0.6120961992383614}, {"type": "nauc_precision_at_3_diff1", "value": 28.964157983970857}, {"type": "nauc_precision_at_3_max", "value": 29.400327308652884}, {"type": "nauc_precision_at_3_std", "value": -3.1499697700355336}, {"type": "nauc_precision_at_5_diff1", "value": 27.504587117367418}, {"type": "nauc_precision_at_5_max", "value": 30.07226208448269}, {"type": "nauc_precision_at_5_std", "value": -2.349913933244111}, {"type": "nauc_recall_at_1000_diff1", "value": 15.55962119542935}, {"type": "nauc_recall_at_1000_max", "value": 14.319938855591138}, {"type": "nauc_recall_at_1000_std", "value": 17.755185961944168}, {"type": "nauc_recall_at_100_diff1", "value": 17.13835133172289}, {"type": "nauc_recall_at_100_max", "value": 14.963855394840023}, {"type": "nauc_recall_at_100_std", "value": 6.03739710571083}, {"type": "nauc_recall_at_10_diff1", "value": 25.825685913064444}, {"type": "nauc_recall_at_10_max", "value": 23.892438517711863}, {"type": "nauc_recall_at_10_std", "value": -4.618370778838095}, {"type": "nauc_recall_at_1_diff1", "value": 41.858308280129286}, {"type": "nauc_recall_at_1_max", "value": 29.59713822513886}, {"type": "nauc_recall_at_1_std", "value": -5.112958479444919}, {"type": "nauc_recall_at_20_diff1", "value": 23.270446548799935}, {"type": "nauc_recall_at_20_max", "value": 22.676377474931055}, {"type": "nauc_recall_at_20_std", "value": -2.4631378318557635}, {"type": "nauc_recall_at_3_diff1", "value": 31.100368984587128}, {"type": "nauc_recall_at_3_max", "value": 27.09922934111932}, {"type": "nauc_recall_at_3_std", "value": -3.1714853286064946}, {"type": "nauc_recall_at_5_diff1", "value": 29.82135009500676}, {"type": "nauc_recall_at_5_max", "value": 26.424051798244985}, {"type": "nauc_recall_at_5_std", "value": -2.966236526459052}, {"type": "ndcg_at_1", 
"value": 14.459}, {"type": "ndcg_at_10", "value": 18.752}, {"type": "ndcg_at_100", "value": 22.488}, {"type": "ndcg_at_1000", "value": 25.463}, {"type": "ndcg_at_20", "value": 19.703}, {"type": "ndcg_at_3", "value": 16.317}, {"type": "ndcg_at_5", "value": 17.267}, {"type": "precision_at_1", "value": 14.459}, {"type": "precision_at_10", "value": 3.1530000000000005}, {"type": "precision_at_100", "value": 0.567}, {"type": "precision_at_1000", "value": 0.091}, {"type": "precision_at_20", "value": 1.8190000000000002}, {"type": "precision_at_3", "value": 7.369000000000001}, {"type": "precision_at_5", "value": 5.131}, {"type": "recall_at_1", "value": 12.076}, {"type": "recall_at_10", "value": 24.901999999999997}, {"type": "recall_at_100", "value": 42.535000000000004}, {"type": "recall_at_1000", "value": 64.786}, {"type": "recall_at_20", "value": 28.42}, {"type": "recall_at_3", "value": 17.871000000000002}, {"type": "recall_at_5", "value": 20.328}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWebmastersRetrieval (default)", "type": "mteb/cqadupstack-webmasters", "config": "default", "split": "test", "revision": "160c094312a0e1facb97e55eeddb698c0abe3571"}, "metrics": [{"type": "main_score", "value": 21.488}, {"type": "map_at_1", "value": 13.569999999999999}, {"type": "map_at_10", "value": 18.184}, {"type": "map_at_100", "value": 19.151}, {"type": "map_at_1000", "value": 19.331}, {"type": "map_at_20", "value": 18.619}, {"type": "map_at_3", "value": 16.666}, {"type": "map_at_5", "value": 17.73}, {"type": "mrr_at_1", "value": 17.193675889328063}, {"type": "mrr_at_10", "value": 21.833082376560643}, {"type": "mrr_at_100", "value": 22.67117038809971}, {"type": "mrr_at_1000", "value": 22.76433404351483}, {"type": "mrr_at_20", "value": 22.20200942617089}, {"type": "mrr_at_3", "value": 20.52042160737813}, {"type": "mrr_at_5", "value": 21.380105401844535}, {"type": "nauc_map_at_1000_diff1", "value": 35.49958838022679}, {"type": "nauc_map_at_1000_max", "value": 27.74062097598903}, {"type": "nauc_map_at_1000_std", "value": -10.515093354385309}, {"type": "nauc_map_at_100_diff1", "value": 35.56722100038519}, {"type": "nauc_map_at_100_max", "value": 27.827605374816354}, {"type": "nauc_map_at_100_std", "value": -10.631512595972834}, {"type": "nauc_map_at_10_diff1", "value": 35.91616127603119}, {"type": "nauc_map_at_10_max", "value": 28.165439663736507}, {"type": "nauc_map_at_10_std", "value": -11.08789520401649}, {"type": "nauc_map_at_1_diff1", "value": 43.19178740943906}, {"type": "nauc_map_at_1_max", "value": 30.877102640311726}, {"type": "nauc_map_at_1_std", "value": -14.165080939187726}, {"type": "nauc_map_at_20_diff1", "value": 35.79766863342843}, {"type": "nauc_map_at_20_max", "value": 28.059404735661243}, {"type": "nauc_map_at_20_std", "value": -11.072321333753566}, {"type": "nauc_map_at_3_diff1", "value": 37.897605640025475}, {"type": "nauc_map_at_3_max", "value": 28.177172477006117}, {"type": "nauc_map_at_3_std", "value": -12.136111183330279}, {"type": "nauc_map_at_5_diff1", "value": 36.44434777898687}, {"type": "nauc_map_at_5_max", "value": 28.438512971898394}, {"type": "nauc_map_at_5_std", "value": -10.926696695866928}, {"type": "nauc_mrr_at_1000_diff1", "value": 36.13714281845032}, {"type": "nauc_mrr_at_1000_max", "value": 26.282536844730803}, {"type": "nauc_mrr_at_1000_std", "value": -9.856391084807372}, {"type": "nauc_mrr_at_100_diff1", "value": 36.11260358526963}, {"type": "nauc_mrr_at_100_max", "value": 26.251055434341158}, {"type": "nauc_mrr_at_100_std", "value": 
-9.866249832625387}, {"type": "nauc_mrr_at_10_diff1", "value": 36.39768434891786}, {"type": "nauc_mrr_at_10_max", "value": 26.369874684734597}, {"type": "nauc_mrr_at_10_std", "value": -10.140677127064409}, {"type": "nauc_mrr_at_1_diff1", "value": 43.97681003969528}, {"type": "nauc_mrr_at_1_max", "value": 29.836613510418573}, {"type": "nauc_mrr_at_1_std", "value": -13.729257304690295}, {"type": "nauc_mrr_at_20_diff1", "value": 36.2936027454046}, {"type": "nauc_mrr_at_20_max", "value": 26.312955186456488}, {"type": "nauc_mrr_at_20_std", "value": -10.177068130665152}, {"type": "nauc_mrr_at_3_diff1", "value": 38.01813544163268}, {"type": "nauc_mrr_at_3_max", "value": 26.450298271894578}, {"type": "nauc_mrr_at_3_std", "value": -10.606258695223955}, {"type": "nauc_mrr_at_5_diff1", "value": 36.66139719774965}, {"type": "nauc_mrr_at_5_max", "value": 26.509309350284294}, {"type": "nauc_mrr_at_5_std", "value": -9.947243479271682}, {"type": "nauc_ndcg_at_1000_diff1", "value": 31.791493593552133}, {"type": "nauc_ndcg_at_1000_max", "value": 25.324361418674858}, {"type": "nauc_ndcg_at_1000_std", "value": -6.7443196116990425}, {"type": "nauc_ndcg_at_100_diff1", "value": 31.54953518236872}, {"type": "nauc_ndcg_at_100_max", "value": 25.188716359357414}, {"type": "nauc_ndcg_at_100_std", "value": -6.839894709820292}, {"type": "nauc_ndcg_at_10_diff1", "value": 33.098147949306394}, {"type": "nauc_ndcg_at_10_max", "value": 25.405004571973617}, {"type": "nauc_ndcg_at_10_std", "value": -9.445873172910993}, {"type": "nauc_ndcg_at_1_diff1", "value": 43.97681003969528}, {"type": "nauc_ndcg_at_1_max", "value": 29.836613510418573}, {"type": "nauc_ndcg_at_1_std", "value": -13.729257304690295}, {"type": "nauc_ndcg_at_20_diff1", "value": 32.92224490482159}, {"type": "nauc_ndcg_at_20_max", "value": 25.547859604065703}, {"type": "nauc_ndcg_at_20_std", "value": -9.241908708414929}, {"type": "nauc_ndcg_at_3_diff1", "value": 36.53902441073446}, {"type": "nauc_ndcg_at_3_max", "value": 25.133819114707258}, {"type": "nauc_ndcg_at_3_std", "value": -10.692158418093511}, {"type": "nauc_ndcg_at_5_diff1", "value": 33.95545160989453}, {"type": "nauc_ndcg_at_5_max", "value": 25.718632036099127}, {"type": "nauc_ndcg_at_5_std", "value": -9.232699386322327}, {"type": "nauc_precision_at_1000_diff1", "value": 0.7176996575689929}, {"type": "nauc_precision_at_1000_max", "value": -6.206679830059766}, {"type": "nauc_precision_at_1000_std", "value": 15.194409401229048}, {"type": "nauc_precision_at_100_diff1", "value": 6.0746313447861455}, {"type": "nauc_precision_at_100_max", "value": 1.8294518479685982}, {"type": "nauc_precision_at_100_std", "value": 8.37195469826675}, {"type": "nauc_precision_at_10_diff1", "value": 20.73981815339893}, {"type": "nauc_precision_at_10_max", "value": 15.478261828007453}, {"type": "nauc_precision_at_10_std", "value": -5.5561745194715275}, {"type": "nauc_precision_at_1_diff1", "value": 43.97681003969528}, {"type": "nauc_precision_at_1_max", "value": 29.836613510418573}, {"type": "nauc_precision_at_1_std", "value": -13.729257304690295}, {"type": "nauc_precision_at_20_diff1", "value": 19.796357243134437}, {"type": "nauc_precision_at_20_max", "value": 14.737729170595262}, {"type": "nauc_precision_at_20_std", "value": -1.9384122215911435}, {"type": "nauc_precision_at_3_diff1", "value": 31.865572834643885}, {"type": "nauc_precision_at_3_max", "value": 20.374070383077616}, {"type": "nauc_precision_at_3_std", "value": -8.278156186226331}, {"type": "nauc_precision_at_5_diff1", "value": 24.892982796410482}, {"type": 
"nauc_precision_at_5_max", "value": 18.471691298099184}, {"type": "nauc_precision_at_5_std", "value": -5.556018739034546}, {"type": "nauc_recall_at_1000_diff1", "value": 13.11384429793443}, {"type": "nauc_recall_at_1000_max", "value": 14.1557785679994}, {"type": "nauc_recall_at_1000_std", "value": 9.786662648320794}, {"type": "nauc_recall_at_100_diff1", "value": 18.975726964682863}, {"type": "nauc_recall_at_100_max", "value": 17.463053263913643}, {"type": "nauc_recall_at_100_std", "value": 5.193025295117909}, {"type": "nauc_recall_at_10_diff1", "value": 26.179450874152614}, {"type": "nauc_recall_at_10_max", "value": 21.634335314260436}, {"type": "nauc_recall_at_10_std", "value": -5.718314080956008}, {"type": "nauc_recall_at_1_diff1", "value": 43.19178740943906}, {"type": "nauc_recall_at_1_max", "value": 30.877102640311726}, {"type": "nauc_recall_at_1_std", "value": -14.165080939187726}, {"type": "nauc_recall_at_20_diff1", "value": 25.087605827678395}, {"type": "nauc_recall_at_20_max", "value": 20.130863094684713}, {"type": "nauc_recall_at_20_std", "value": -5.62005732659447}, {"type": "nauc_recall_at_3_diff1", "value": 32.74815068110827}, {"type": "nauc_recall_at_3_max", "value": 22.403658999564968}, {"type": "nauc_recall_at_3_std", "value": -8.683387701904735}, {"type": "nauc_recall_at_5_diff1", "value": 27.755340185938906}, {"type": "nauc_recall_at_5_max", "value": 23.586435487805275}, {"type": "nauc_recall_at_5_std", "value": -5.135301791301631}, {"type": "ndcg_at_1", "value": 17.194000000000003}, {"type": "ndcg_at_10", "value": 21.488}, {"type": "ndcg_at_100", "value": 26.150000000000002}, {"type": "ndcg_at_1000", "value": 29.805999999999997}, {"type": "ndcg_at_20", "value": 22.718}, {"type": "ndcg_at_3", "value": 19.434}, {"type": "ndcg_at_5", "value": 20.746000000000002}, {"type": "precision_at_1", "value": 17.194000000000003}, {"type": "precision_at_10", "value": 4.091}, {"type": "precision_at_100", "value": 0.931}, {"type": "precision_at_1000", "value": 0.18}, {"type": "precision_at_20", "value": 2.54}, {"type": "precision_at_3", "value": 9.354}, {"type": "precision_at_5", "value": 6.877}, {"type": "recall_at_1", "value": 13.569999999999999}, {"type": "recall_at_10", "value": 26.634999999999998}, {"type": "recall_at_100", "value": 49.457}, {"type": "recall_at_1000", "value": 74.978}, {"type": "recall_at_20", "value": 31.830000000000002}, {"type": "recall_at_3", "value": 20.014000000000003}, {"type": "recall_at_5", "value": 23.915}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWordpressRetrieval (default)", "type": "mteb/cqadupstack-wordpress", "config": "default", "split": "test", "revision": "4ffe81d471b1924886b33c7567bfb200e9eec5c4"}, "metrics": [{"type": "main_score", "value": 14.286999999999999}, {"type": "map_at_1", "value": 7.8}, {"type": "map_at_10", "value": 11.603}, {"type": "map_at_100", "value": 12.322}, {"type": "map_at_1000", "value": 12.424}, {"type": "map_at_20", "value": 11.917}, {"type": "map_at_3", "value": 10.241999999999999}, {"type": "map_at_5", "value": 10.894}, {"type": "mrr_at_1", "value": 8.687615526802219}, {"type": "mrr_at_10", "value": 12.827509315494535}, {"type": "mrr_at_100", "value": 13.569825117763369}, {"type": "mrr_at_1000", "value": 13.664616620933204}, {"type": "mrr_at_20", "value": 13.153434876243523}, {"type": "mrr_at_3", "value": 11.367837338262479}, {"type": "mrr_at_5", "value": 12.060998151571168}, {"type": "nauc_map_at_1000_diff1", "value": 21.953862709034876}, {"type": "nauc_map_at_1000_max", "value": 
29.066372403463188}, {"type": "nauc_map_at_1000_std", "value": -7.250987758385709}, {"type": "nauc_map_at_100_diff1", "value": 21.93592696083288}, {"type": "nauc_map_at_100_max", "value": 29.045471554920262}, {"type": "nauc_map_at_100_std", "value": -7.347433609703373}, {"type": "nauc_map_at_10_diff1", "value": 22.272278874310526}, {"type": "nauc_map_at_10_max", "value": 29.620096522232625}, {"type": "nauc_map_at_10_std", "value": -7.56004907693945}, {"type": "nauc_map_at_1_diff1", "value": 29.70146011799996}, {"type": "nauc_map_at_1_max", "value": 33.6582002068041}, {"type": "nauc_map_at_1_std", "value": -11.43320242844524}, {"type": "nauc_map_at_20_diff1", "value": 22.06594846110943}, {"type": "nauc_map_at_20_max", "value": 29.4352137076757}, {"type": "nauc_map_at_20_std", "value": -7.640434271085226}, {"type": "nauc_map_at_3_diff1", "value": 23.260962069088908}, {"type": "nauc_map_at_3_max", "value": 29.85851009040783}, {"type": "nauc_map_at_3_std", "value": -8.493416631968287}, {"type": "nauc_map_at_5_diff1", "value": 21.67294210722253}, {"type": "nauc_map_at_5_max", "value": 30.00826915229784}, {"type": "nauc_map_at_5_std", "value": -8.443622415442166}, {"type": "nauc_mrr_at_1000_diff1", "value": 22.104239631860946}, {"type": "nauc_mrr_at_1000_max", "value": 28.258201262169408}, {"type": "nauc_mrr_at_1000_std", "value": -6.622347594933508}, {"type": "nauc_mrr_at_100_diff1", "value": 22.098536010618822}, {"type": "nauc_mrr_at_100_max", "value": 28.220245799295107}, {"type": "nauc_mrr_at_100_std", "value": -6.675059636819916}, {"type": "nauc_mrr_at_10_diff1", "value": 22.63401956823091}, {"type": "nauc_mrr_at_10_max", "value": 28.626927108349953}, {"type": "nauc_mrr_at_10_std", "value": -6.820539359416205}, {"type": "nauc_mrr_at_1_diff1", "value": 30.188275726076373}, {"type": "nauc_mrr_at_1_max", "value": 32.97489523305523}, {"type": "nauc_mrr_at_1_std", "value": -10.419791276142904}, {"type": "nauc_mrr_at_20_diff1", "value": 22.125155778128224}, {"type": "nauc_mrr_at_20_max", "value": 28.54628678699734}, {"type": "nauc_mrr_at_20_std", "value": -6.940802668158878}, {"type": "nauc_mrr_at_3_diff1", "value": 23.20363757655989}, {"type": "nauc_mrr_at_3_max", "value": 28.72037838694496}, {"type": "nauc_mrr_at_3_std", "value": -7.863052941940037}, {"type": "nauc_mrr_at_5_diff1", "value": 21.769709814351764}, {"type": "nauc_mrr_at_5_max", "value": 29.01182865041742}, {"type": "nauc_mrr_at_5_std", "value": -7.823698429495608}, {"type": "nauc_ndcg_at_1000_diff1", "value": 18.839399965777904}, {"type": "nauc_ndcg_at_1000_max", "value": 26.409685169340147}, {"type": "nauc_ndcg_at_1000_std", "value": -2.75323598669575}, {"type": "nauc_ndcg_at_100_diff1", "value": 18.980282228228756}, {"type": "nauc_ndcg_at_100_max", "value": 25.888915953926944}, {"type": "nauc_ndcg_at_100_std", "value": -4.247963667020685}, {"type": "nauc_ndcg_at_10_diff1", "value": 20.268021320985767}, {"type": "nauc_ndcg_at_10_max", "value": 28.007422388366308}, {"type": "nauc_ndcg_at_10_std", "value": -6.035880880912193}, {"type": "nauc_ndcg_at_1_diff1", "value": 30.188275726076373}, {"type": "nauc_ndcg_at_1_max", "value": 32.97489523305523}, {"type": "nauc_ndcg_at_1_std", "value": -10.419791276142904}, {"type": "nauc_ndcg_at_20_diff1", "value": 19.475382543592772}, {"type": "nauc_ndcg_at_20_max", "value": 27.783688816814124}, {"type": "nauc_ndcg_at_20_std", "value": -6.375668645265656}, {"type": "nauc_ndcg_at_3_diff1", "value": 21.17886661176787}, {"type": "nauc_ndcg_at_3_max", "value": 28.281440509906492}, {"type": 
"nauc_ndcg_at_3_std", "value": -7.544056618031584}, {"type": "nauc_ndcg_at_5_diff1", "value": 18.58832973791431}, {"type": "nauc_ndcg_at_5_max", "value": 28.724509771603614}, {"type": "nauc_ndcg_at_5_std", "value": -7.783318230914177}, {"type": "nauc_precision_at_1000_diff1", "value": 7.129904674618118}, {"type": "nauc_precision_at_1000_max", "value": 7.635578876601942}, {"type": "nauc_precision_at_1000_std", "value": 9.846306597273538}, {"type": "nauc_precision_at_100_diff1", "value": 11.813398381635091}, {"type": "nauc_precision_at_100_max", "value": 16.32313056743183}, {"type": "nauc_precision_at_100_std", "value": 4.336689858200671}, {"type": "nauc_precision_at_10_diff1", "value": 17.446504784777808}, {"type": "nauc_precision_at_10_max", "value": 25.408869205476464}, {"type": "nauc_precision_at_10_std", "value": -1.6572908083948488}, {"type": "nauc_precision_at_1_diff1", "value": 30.188275726076373}, {"type": "nauc_precision_at_1_max", "value": 32.97489523305523}, {"type": "nauc_precision_at_1_std", "value": -10.419791276142904}, {"type": "nauc_precision_at_20_diff1", "value": 14.91677316093746}, {"type": "nauc_precision_at_20_max", "value": 24.32645869103317}, {"type": "nauc_precision_at_20_std", "value": -2.9225394914435876}, {"type": "nauc_precision_at_3_diff1", "value": 16.841177267297603}, {"type": "nauc_precision_at_3_max", "value": 24.81824344898353}, {"type": "nauc_precision_at_3_std", "value": -6.548456214157852}, {"type": "nauc_precision_at_5_diff1", "value": 12.601361749535691}, {"type": "nauc_precision_at_5_max", "value": 25.662845341554753}, {"type": "nauc_precision_at_5_std", "value": -5.257813050604554}, {"type": "nauc_recall_at_1000_diff1", "value": 9.330142559611428}, {"type": "nauc_recall_at_1000_max", "value": 19.55092125312593}, {"type": "nauc_recall_at_1000_std", "value": 12.833888019795856}, {"type": "nauc_recall_at_100_diff1", "value": 12.93335051943625}, {"type": "nauc_recall_at_100_max", "value": 18.554303580780303}, {"type": "nauc_recall_at_100_std", "value": 2.904381331543482}, {"type": "nauc_recall_at_10_diff1", "value": 15.945414878900973}, {"type": "nauc_recall_at_10_max", "value": 24.45894683906371}, {"type": "nauc_recall_at_10_std", "value": -3.3285107959242257}, {"type": "nauc_recall_at_1_diff1", "value": 29.70146011799996}, {"type": "nauc_recall_at_1_max", "value": 33.6582002068041}, {"type": "nauc_recall_at_1_std", "value": -11.43320242844524}, {"type": "nauc_recall_at_20_diff1", "value": 14.54592581450925}, {"type": "nauc_recall_at_20_max", "value": 24.62940289531727}, {"type": "nauc_recall_at_20_std", "value": -4.525466630360646}, {"type": "nauc_recall_at_3_diff1", "value": 15.585536477830441}, {"type": "nauc_recall_at_3_max", "value": 25.217020737509433}, {"type": "nauc_recall_at_3_std", "value": -6.386554399226418}, {"type": "nauc_recall_at_5_diff1", "value": 11.641604418059668}, {"type": "nauc_recall_at_5_max", "value": 26.263641139012208}, {"type": "nauc_recall_at_5_std", "value": -6.77257050164422}, {"type": "ndcg_at_1", "value": 8.688}, {"type": "ndcg_at_10", "value": 14.286999999999999}, {"type": "ndcg_at_100", "value": 18.516}, {"type": "ndcg_at_1000", "value": 21.708}, {"type": "ndcg_at_20", "value": 15.436}, {"type": "ndcg_at_3", "value": 11.376999999999999}, {"type": "ndcg_at_5", "value": 12.551000000000002}, {"type": "precision_at_1", "value": 8.688}, {"type": "precision_at_10", "value": 2.458}, {"type": "precision_at_100", "value": 0.505}, {"type": "precision_at_1000", "value": 0.084}, {"type": "precision_at_20", "value": 1.534}, 
{"type": "precision_at_3", "value": 5.0520000000000005}, {"type": "precision_at_5", "value": 3.697}, {"type": "recall_at_1", "value": 7.8}, {"type": "recall_at_10", "value": 21.59}, {"type": "recall_at_100", "value": 42.101}, {"type": "recall_at_1000", "value": 67.259}, {"type": "recall_at_20", "value": 25.858999999999998}, {"type": "recall_at_3", "value": 13.506000000000002}, {"type": "recall_at_5", "value": 16.408}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER (default)", "type": "mteb/climate-fever", "config": "default", "split": "test", "revision": "47f2ac6acb640fc46020b02a5b59fdda04d39380"}, "metrics": [{"type": "main_score", "value": 12.91}, {"type": "map_at_1", "value": 5.244999999999999}, {"type": "map_at_10", "value": 8.533}, {"type": "map_at_100", "value": 9.562}, {"type": "map_at_1000", "value": 9.701}, {"type": "map_at_20", "value": 9.061}, {"type": "map_at_3", "value": 7.117}, {"type": "map_at_5", "value": 7.747999999999999}, {"type": "mrr_at_1", "value": 11.530944625407166}, {"type": "mrr_at_10", "value": 17.86644951140064}, {"type": "mrr_at_100", "value": 18.874326110051832}, {"type": "mrr_at_1000", "value": 18.94561511680038}, {"type": "mrr_at_20", "value": 18.47706797129705}, {"type": "mrr_at_3", "value": 15.27687296416939}, {"type": "mrr_at_5", "value": 16.576547231270354}, {"type": "nauc_map_at_1000_diff1", "value": 24.54420825290521}, {"type": "nauc_map_at_1000_max", "value": 3.897483834465137}, {"type": "nauc_map_at_1000_std", "value": 19.481805113255135}, {"type": "nauc_map_at_100_diff1", "value": 24.55555745351147}, {"type": "nauc_map_at_100_max", "value": 3.837582687861127}, {"type": "nauc_map_at_100_std", "value": 19.133723602277477}, {"type": "nauc_map_at_10_diff1", "value": 25.265812103632264}, {"type": "nauc_map_at_10_max", "value": 3.8492593876156564}, {"type": "nauc_map_at_10_std", "value": 16.506599027024237}, {"type": "nauc_map_at_1_diff1", "value": 33.94610398172728}, {"type": "nauc_map_at_1_max", "value": 1.6496908677205668}, {"type": "nauc_map_at_1_std", "value": 13.419972442438885}, {"type": "nauc_map_at_20_diff1", "value": 24.72824633420426}, {"type": "nauc_map_at_20_max", "value": 3.783475878999571}, {"type": "nauc_map_at_20_std", "value": 17.84509170410431}, {"type": "nauc_map_at_3_diff1", "value": 26.956755375738854}, {"type": "nauc_map_at_3_max", "value": 3.9095753462098775}, {"type": "nauc_map_at_3_std", "value": 14.346199792189863}, {"type": "nauc_map_at_5_diff1", "value": 26.151346472806736}, {"type": "nauc_map_at_5_max", "value": 3.6340429832669017}, {"type": "nauc_map_at_5_std", "value": 14.297502705786602}, {"type": "nauc_mrr_at_1000_diff1", "value": 23.268773463692998}, {"type": "nauc_mrr_at_1000_max", "value": 6.109347662338191}, {"type": "nauc_mrr_at_1000_std", "value": 19.22652674727219}, {"type": "nauc_mrr_at_100_diff1", "value": 23.269924125626535}, {"type": "nauc_mrr_at_100_max", "value": 6.120703236947665}, {"type": "nauc_mrr_at_100_std", "value": 19.2163581654434}, {"type": "nauc_mrr_at_10_diff1", "value": 23.52516707186784}, {"type": "nauc_mrr_at_10_max", "value": 6.237783397862627}, {"type": "nauc_mrr_at_10_std", "value": 18.18627288507101}, {"type": "nauc_mrr_at_1_diff1", "value": 27.584994677292034}, {"type": "nauc_mrr_at_1_max", "value": 3.822817171895031}, {"type": "nauc_mrr_at_1_std", "value": 13.580944806885068}, {"type": "nauc_mrr_at_20_diff1", "value": 23.18466877243556}, {"type": "nauc_mrr_at_20_max", "value": 6.071619184172904}, {"type": "nauc_mrr_at_20_std", "value": 18.860252064577328}, 
{"type": "nauc_mrr_at_3_diff1", "value": 24.39357898054709}, {"type": "nauc_mrr_at_3_max", "value": 6.496455479357357}, {"type": "nauc_mrr_at_3_std", "value": 16.58571208649782}, {"type": "nauc_mrr_at_5_diff1", "value": 23.789967014710673}, {"type": "nauc_mrr_at_5_max", "value": 6.741427679039848}, {"type": "nauc_mrr_at_5_std", "value": 16.87086607963999}, {"type": "nauc_ndcg_at_1000_diff1", "value": 21.749820902072695}, {"type": "nauc_ndcg_at_1000_max", "value": 4.86812498810872}, {"type": "nauc_ndcg_at_1000_std", "value": 31.235098248353726}, {"type": "nauc_ndcg_at_100_diff1", "value": 21.19681101249399}, {"type": "nauc_ndcg_at_100_max", "value": 4.6861370875702395}, {"type": "nauc_ndcg_at_100_std", "value": 27.272107521053297}, {"type": "nauc_ndcg_at_10_diff1", "value": 22.773032212350426}, {"type": "nauc_ndcg_at_10_max", "value": 4.9873425228251955}, {"type": "nauc_ndcg_at_10_std", "value": 19.5435742476801}, {"type": "nauc_ndcg_at_1_diff1", "value": 27.584994677292034}, {"type": "nauc_ndcg_at_1_max", "value": 3.822817171895031}, {"type": "nauc_ndcg_at_1_std", "value": 13.580944806885068}, {"type": "nauc_ndcg_at_20_diff1", "value": 21.438732145979834}, {"type": "nauc_ndcg_at_20_max", "value": 4.6005835605739245}, {"type": "nauc_ndcg_at_20_std", "value": 22.65431596849159}, {"type": "nauc_ndcg_at_3_diff1", "value": 24.490757645118904}, {"type": "nauc_ndcg_at_3_max", "value": 5.962800738138971}, {"type": "nauc_ndcg_at_3_std", "value": 16.307824488006986}, {"type": "nauc_ndcg_at_5_diff1", "value": 23.993915092342622}, {"type": "nauc_ndcg_at_5_max", "value": 5.236363764316798}, {"type": "nauc_ndcg_at_5_std", "value": 15.82938355562257}, {"type": "nauc_precision_at_1000_diff1", "value": 11.131036670513076}, {"type": "nauc_precision_at_1000_max", "value": 6.822816660809858}, {"type": "nauc_precision_at_1000_std", "value": 46.914426444389676}, {"type": "nauc_precision_at_100_diff1", "value": 10.955370605222562}, {"type": "nauc_precision_at_100_max", "value": 7.306594130327962}, {"type": "nauc_precision_at_100_std", "value": 40.6149528086222}, {"type": "nauc_precision_at_10_diff1", "value": 14.798768173392961}, {"type": "nauc_precision_at_10_max", "value": 8.747564896420851}, {"type": "nauc_precision_at_10_std", "value": 27.017329972663518}, {"type": "nauc_precision_at_1_diff1", "value": 27.584994677292034}, {"type": "nauc_precision_at_1_max", "value": 3.822817171895031}, {"type": "nauc_precision_at_1_std", "value": 13.580944806885068}, {"type": "nauc_precision_at_20_diff1", "value": 11.832837907912124}, {"type": "nauc_precision_at_20_max", "value": 7.84405782779581}, {"type": "nauc_precision_at_20_std", "value": 31.71828414369358}, {"type": "nauc_precision_at_3_diff1", "value": 18.994037151223843}, {"type": "nauc_precision_at_3_max", "value": 9.590257745908866}, {"type": "nauc_precision_at_3_std", "value": 19.0108385933672}, {"type": "nauc_precision_at_5_diff1", "value": 16.84707712963686}, {"type": "nauc_precision_at_5_max", "value": 10.064344353606588}, {"type": "nauc_precision_at_5_std", "value": 19.57545659630027}, {"type": "nauc_recall_at_1000_diff1", "value": 13.874751583251479}, {"type": "nauc_recall_at_1000_max", "value": 1.530199910786395}, {"type": "nauc_recall_at_1000_std", "value": 46.27128687120432}, {"type": "nauc_recall_at_100_diff1", "value": 13.1528347324774}, {"type": "nauc_recall_at_100_max", "value": 1.9375434916868963}, {"type": "nauc_recall_at_100_std", "value": 34.88493356061696}, {"type": "nauc_recall_at_10_diff1", "value": 18.04034405954142}, {"type": 
"nauc_recall_at_10_max", "value": 3.705815311091777}, {"type": "nauc_recall_at_10_std", "value": 21.901312599161166}, {"type": "nauc_recall_at_1_diff1", "value": 33.94610398172728}, {"type": "nauc_recall_at_1_max", "value": 1.6496908677205668}, {"type": "nauc_recall_at_1_std", "value": 13.419972442438885}, {"type": "nauc_recall_at_20_diff1", "value": 14.202376007797774}, {"type": "nauc_recall_at_20_max", "value": 2.2147147149777644}, {"type": "nauc_recall_at_20_std", "value": 27.12814167677131}, {"type": "nauc_recall_at_3_diff1", "value": 22.921929014221593}, {"type": "nauc_recall_at_3_max", "value": 5.495801553489075}, {"type": "nauc_recall_at_3_std", "value": 16.34255997562194}, {"type": "nauc_recall_at_5_diff1", "value": 20.706978570804146}, {"type": "nauc_recall_at_5_max", "value": 4.397716927561929}, {"type": "nauc_recall_at_5_std", "value": 15.316487242353569}, {"type": "ndcg_at_1", "value": 11.530999999999999}, {"type": "ndcg_at_10", "value": 12.91}, {"type": "ndcg_at_100", "value": 17.926000000000002}, {"type": "ndcg_at_1000", "value": 21.165}, {"type": "ndcg_at_20", "value": 14.793000000000001}, {"type": "ndcg_at_3", "value": 9.953}, {"type": "ndcg_at_5", "value": 10.847999999999999}, {"type": "precision_at_1", "value": 11.530999999999999}, {"type": "precision_at_10", "value": 4.247999999999999}, {"type": "precision_at_100", "value": 0.943}, {"type": "precision_at_1000", "value": 0.154}, {"type": "precision_at_20", "value": 2.902}, {"type": "precision_at_3", "value": 7.4270000000000005}, {"type": "precision_at_5", "value": 5.811}, {"type": "recall_at_1", "value": 5.244999999999999}, {"type": "recall_at_10", "value": 16.317999999999998}, {"type": "recall_at_100", "value": 34.201}, {"type": "recall_at_1000", "value": 53.069}, {"type": "recall_at_20", "value": 21.808}, {"type": "recall_at_3", "value": 9.167}, {"type": "recall_at_5", "value": 11.605}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia (default)", "type": "mteb/dbpedia", "config": "default", "split": "test", "revision": "c0f706b76e590d620bd6618b3ca8efdd34e2d659"}, "metrics": [{"type": "main_score", "value": 17.809}, {"type": "map_at_1", "value": 2.9080000000000004}, {"type": "map_at_10", "value": 6.72}, {"type": "map_at_100", "value": 9.452}, {"type": "map_at_1000", "value": 10.141}, {"type": "map_at_20", "value": 7.775}, {"type": "map_at_3", "value": 4.838}, {"type": "map_at_5", "value": 5.595}, {"type": "mrr_at_1", "value": 33.25}, {"type": "mrr_at_10", "value": 43.10208333333334}, {"type": "mrr_at_100", "value": 43.91155190635367}, {"type": "mrr_at_1000", "value": 43.942081922491234}, {"type": "mrr_at_20", "value": 43.53115904133708}, {"type": "mrr_at_3", "value": 40.37499999999999}, {"type": "mrr_at_5", "value": 41.937500000000014}, {"type": "nauc_map_at_1000_diff1", "value": 12.464843106371594}, {"type": "nauc_map_at_1000_max", "value": 20.787030702897695}, {"type": "nauc_map_at_1000_std", "value": 28.95839241630686}, {"type": "nauc_map_at_100_diff1", "value": 12.056329590233632}, {"type": "nauc_map_at_100_max", "value": 19.582266707899254}, {"type": "nauc_map_at_100_std", "value": 25.720291368581556}, {"type": "nauc_map_at_10_diff1", "value": 11.947408635481318}, {"type": "nauc_map_at_10_max", "value": 12.217216974254558}, {"type": "nauc_map_at_10_std", "value": 11.576137158486222}, {"type": "nauc_map_at_1_diff1", "value": 21.07052969340483}, {"type": "nauc_map_at_1_max", "value": 9.194196653066513}, {"type": "nauc_map_at_1_std", "value": 10.422057533092019}, {"type": "nauc_map_at_20_diff1", 
"value": 12.996950185313217}, {"type": "nauc_map_at_20_max", "value": 14.877459115978706}, {"type": "nauc_map_at_20_std", "value": 16.078479194353804}, {"type": "nauc_map_at_3_diff1", "value": 12.713931226731026}, {"type": "nauc_map_at_3_max", "value": 10.534051914774205}, {"type": "nauc_map_at_3_std", "value": 6.634455286829892}, {"type": "nauc_map_at_5_diff1", "value": 13.49610237252039}, {"type": "nauc_map_at_5_max", "value": 11.395460371209825}, {"type": "nauc_map_at_5_std", "value": 8.556070768754035}, {"type": "nauc_mrr_at_1000_diff1", "value": 23.440732029069466}, {"type": "nauc_mrr_at_1000_max", "value": 28.227169599675545}, {"type": "nauc_mrr_at_1000_std", "value": 24.271326293306412}, {"type": "nauc_mrr_at_100_diff1", "value": 23.431318332471474}, {"type": "nauc_mrr_at_100_max", "value": 28.247320676020777}, {"type": "nauc_mrr_at_100_std", "value": 24.289544335994325}, {"type": "nauc_mrr_at_10_diff1", "value": 23.10244787887524}, {"type": "nauc_mrr_at_10_max", "value": 28.230713760094805}, {"type": "nauc_mrr_at_10_std", "value": 23.872224687475942}, {"type": "nauc_mrr_at_1_diff1", "value": 27.28025238438753}, {"type": "nauc_mrr_at_1_max", "value": 29.836674855640243}, {"type": "nauc_mrr_at_1_std", "value": 25.025348142188943}, {"type": "nauc_mrr_at_20_diff1", "value": 23.359567556301606}, {"type": "nauc_mrr_at_20_max", "value": 28.045194655704407}, {"type": "nauc_mrr_at_20_std", "value": 24.13890939061388}, {"type": "nauc_mrr_at_3_diff1", "value": 23.223682067100583}, {"type": "nauc_mrr_at_3_max", "value": 26.838082016739516}, {"type": "nauc_mrr_at_3_std", "value": 22.74149701561025}, {"type": "nauc_mrr_at_5_diff1", "value": 23.254953330680365}, {"type": "nauc_mrr_at_5_max", "value": 27.731371603773923}, {"type": "nauc_mrr_at_5_std", "value": 23.673666153182165}, {"type": "nauc_ndcg_at_1000_diff1", "value": 16.257303689752668}, {"type": "nauc_ndcg_at_1000_max", "value": 20.372685600143058}, {"type": "nauc_ndcg_at_1000_std", "value": 43.5647262197375}, {"type": "nauc_ndcg_at_100_diff1", "value": 13.712668770381223}, {"type": "nauc_ndcg_at_100_max", "value": 17.3070502066831}, {"type": "nauc_ndcg_at_100_std", "value": 34.01332703454124}, {"type": "nauc_ndcg_at_10_diff1", "value": 15.272864554548784}, {"type": "nauc_ndcg_at_10_max", "value": 17.386211785825974}, {"type": "nauc_ndcg_at_10_std", "value": 25.093090359467173}, {"type": "nauc_ndcg_at_1_diff1", "value": 26.811305606655417}, {"type": "nauc_ndcg_at_1_max", "value": 21.81236974804081}, {"type": "nauc_ndcg_at_1_std", "value": 21.876218231165208}, {"type": "nauc_ndcg_at_20_diff1", "value": 15.570243759415145}, {"type": "nauc_ndcg_at_20_max", "value": 15.48792448315102}, {"type": "nauc_ndcg_at_20_std", "value": 24.906899062098667}, {"type": "nauc_ndcg_at_3_diff1", "value": 16.562964238706122}, {"type": "nauc_ndcg_at_3_max", "value": 19.01543958115029}, {"type": "nauc_ndcg_at_3_std", "value": 22.48353735036461}, {"type": "nauc_ndcg_at_5_diff1", "value": 16.232340125010094}, {"type": "nauc_ndcg_at_5_max", "value": 18.05687758131152}, {"type": "nauc_ndcg_at_5_std", "value": 22.85229110345859}, {"type": "nauc_precision_at_1000_diff1", "value": 11.56385665060498}, {"type": "nauc_precision_at_1000_max", "value": 20.681035939178482}, {"type": "nauc_precision_at_1000_std", "value": 36.897327543333354}, {"type": "nauc_precision_at_100_diff1", "value": 11.514032623059778}, {"type": "nauc_precision_at_100_max", "value": 29.047762650445875}, {"type": "nauc_precision_at_100_std", "value": 47.298484079525174}, {"type": 
"nauc_precision_at_10_diff1", "value": 9.30196384643561}, {"type": "nauc_precision_at_10_max", "value": 26.02930642801758}, {"type": "nauc_precision_at_10_std", "value": 33.683648923271505}, {"type": "nauc_precision_at_1_diff1", "value": 27.28025238438753}, {"type": "nauc_precision_at_1_max", "value": 29.836674855640243}, {"type": "nauc_precision_at_1_std", "value": 25.025348142188943}, {"type": "nauc_precision_at_20_diff1", "value": 12.53572220614082}, {"type": "nauc_precision_at_20_max", "value": 27.436119324419035}, {"type": "nauc_precision_at_20_std", "value": 37.4124720701224}, {"type": "nauc_precision_at_3_diff1", "value": 11.473474612430659}, {"type": "nauc_precision_at_3_max", "value": 25.108171747341117}, {"type": "nauc_precision_at_3_std", "value": 22.218903585707725}, {"type": "nauc_precision_at_5_diff1", "value": 11.651584386463366}, {"type": "nauc_precision_at_5_max", "value": 26.45472985167932}, {"type": "nauc_precision_at_5_std", "value": 25.45046633350586}, {"type": "nauc_recall_at_1000_diff1", "value": 8.952304094844058}, {"type": "nauc_recall_at_1000_max", "value": 6.398413185072366}, {"type": "nauc_recall_at_1000_std", "value": 43.77431410498004}, {"type": "nauc_recall_at_100_diff1", "value": 2.4342418404967687}, {"type": "nauc_recall_at_100_max", "value": 7.263012696368243}, {"type": "nauc_recall_at_100_std", "value": 29.36126458392181}, {"type": "nauc_recall_at_10_diff1", "value": 2.7077112127598997}, {"type": "nauc_recall_at_10_max", "value": 2.7599172986852833}, {"type": "nauc_recall_at_10_std", "value": 2.533785276895851}, {"type": "nauc_recall_at_1_diff1", "value": 21.07052969340483}, {"type": "nauc_recall_at_1_max", "value": 9.194196653066513}, {"type": "nauc_recall_at_1_std", "value": 10.422057533092019}, {"type": "nauc_recall_at_20_diff1", "value": 3.6472612051309605}, {"type": "nauc_recall_at_20_max", "value": 1.8491755772071496}, {"type": "nauc_recall_at_20_std", "value": 7.2724409200148274}, {"type": "nauc_recall_at_3_diff1", "value": 6.007910279710785}, {"type": "nauc_recall_at_3_max", "value": 4.734271875365448}, {"type": "nauc_recall_at_3_std", "value": 0.08424826705888651}, {"type": "nauc_recall_at_5_diff1", "value": 6.405796890104426}, {"type": "nauc_recall_at_5_max", "value": 5.069916025405803}, {"type": "nauc_recall_at_5_std", "value": 0.7763463942604057}, {"type": "ndcg_at_1", "value": 22.875}, {"type": "ndcg_at_10", "value": 17.809}, {"type": "ndcg_at_100", "value": 20.913}, {"type": "ndcg_at_1000", "value": 26.843}, {"type": "ndcg_at_20", "value": 17.688000000000002}, {"type": "ndcg_at_3", "value": 19.901}, {"type": "ndcg_at_5", "value": 18.587}, {"type": "precision_at_1", "value": 33.25}, {"type": "precision_at_10", "value": 16.025}, {"type": "precision_at_100", "value": 5.265000000000001}, {"type": "precision_at_1000", "value": 1.097}, {"type": "precision_at_20", "value": 12.188}, {"type": "precision_at_3", "value": 25.0}, {"type": "precision_at_5", "value": 20.65}, {"type": "recall_at_1", "value": 2.9080000000000004}, {"type": "recall_at_10", "value": 11.067}, {"type": "recall_at_100", "value": 26.874}, {"type": "recall_at_1000", "value": 47.693999999999996}, {"type": "recall_at_20", "value": 15.251999999999999}, {"type": "recall_at_3", "value": 6.065}, {"type": "recall_at_5", "value": 7.84}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification (default)", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", 
"value": 36.714999999999996}, {"type": "f1", "value": 33.535803051550175}, {"type": "f1_weighted", "value": 38.73741738231718}, {"type": "main_score", "value": 36.714999999999996}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER (default)", "type": "mteb/fever", "config": "default", "split": "test", "revision": "bea83ef9e8fb933d90a2f1d5515737465d613e12"}, "metrics": [{"type": "main_score", "value": 21.749}, {"type": "map_at_1", "value": 11.853}, {"type": "map_at_10", "value": 17.788999999999998}, {"type": "map_at_100", "value": 18.695}, {"type": "map_at_1000", "value": 18.783}, {"type": "map_at_20", "value": 18.279999999999998}, {"type": "map_at_3", "value": 15.488}, {"type": "map_at_5", "value": 16.766000000000002}, {"type": "mrr_at_1", "value": 12.57125712571257}, {"type": "mrr_at_10", "value": 18.809821458336327}, {"type": "mrr_at_100", "value": 19.746247724300634}, {"type": "mrr_at_1000", "value": 19.828660283641725}, {"type": "mrr_at_20", "value": 19.325603053511834}, {"type": "mrr_at_3", "value": 16.394139413941424}, {"type": "mrr_at_5", "value": 17.745774577457816}, {"type": "nauc_map_at_1000_diff1", "value": 20.42216628213536}, {"type": "nauc_map_at_1000_max", "value": 10.981655836421126}, {"type": "nauc_map_at_1000_std", "value": -11.06254344432782}, {"type": "nauc_map_at_100_diff1", "value": 20.430402218559234}, {"type": "nauc_map_at_100_max", "value": 10.946143961065747}, {"type": "nauc_map_at_100_std", "value": -11.067509219026796}, {"type": "nauc_map_at_10_diff1", "value": 20.633613948259416}, {"type": "nauc_map_at_10_max", "value": 10.749227715844583}, {"type": "nauc_map_at_10_std", "value": -11.683369497410549}, {"type": "nauc_map_at_1_diff1", "value": 25.93334856369996}, {"type": "nauc_map_at_1_max", "value": 11.756956805295456}, {"type": "nauc_map_at_1_std", "value": -15.812253616827613}, {"type": "nauc_map_at_20_diff1", "value": 20.53707678990591}, {"type": "nauc_map_at_20_max", "value": 10.852465838841702}, {"type": "nauc_map_at_20_std", "value": -11.300317053293336}, {"type": "nauc_map_at_3_diff1", "value": 21.197417138364173}, {"type": "nauc_map_at_3_max", "value": 10.400364426417779}, {"type": "nauc_map_at_3_std", "value": -13.649848120655465}, {"type": "nauc_map_at_5_diff1", "value": 20.84809728992014}, {"type": "nauc_map_at_5_max", "value": 10.503569044791474}, {"type": "nauc_map_at_5_std", "value": -12.308858242572567}, {"type": "nauc_mrr_at_1000_diff1", "value": 20.256963399952387}, {"type": "nauc_mrr_at_1000_max", "value": 11.527178442395032}, {"type": "nauc_mrr_at_1000_std", "value": -11.30536908201306}, {"type": "nauc_mrr_at_100_diff1", "value": 20.25440064656351}, {"type": "nauc_mrr_at_100_max", "value": 11.501764619959824}, {"type": "nauc_mrr_at_100_std", "value": -11.2998442261201}, {"type": "nauc_mrr_at_10_diff1", "value": 20.43696908799925}, {"type": "nauc_mrr_at_10_max", "value": 11.301632140198784}, {"type": "nauc_mrr_at_10_std", "value": -11.862198378461013}, {"type": "nauc_mrr_at_1_diff1", "value": 25.788068994261927}, {"type": "nauc_mrr_at_1_max", "value": 12.494106068654443}, {"type": "nauc_mrr_at_1_std", "value": -16.072022142157422}, {"type": "nauc_mrr_at_20_diff1", "value": 20.360762859316843}, {"type": "nauc_mrr_at_20_max", "value": 11.39368067763063}, {"type": "nauc_mrr_at_20_std", "value": -11.492483206429506}, {"type": "nauc_mrr_at_3_diff1", "value": 21.005337906582582}, {"type": "nauc_mrr_at_3_max", "value": 11.007636661630489}, {"type": "nauc_mrr_at_3_std", "value": -13.968861333278157}, {"type": "nauc_mrr_at_5_diff1", 
"value": 20.645981078269408}, {"type": "nauc_mrr_at_5_max", "value": 11.098139454539123}, {"type": "nauc_mrr_at_5_std", "value": -12.49821888423247}, {"type": "nauc_ndcg_at_1000_diff1", "value": 17.961862840683438}, {"type": "nauc_ndcg_at_1000_max", "value": 12.633382278961424}, {"type": "nauc_ndcg_at_1000_std", "value": -6.623628781829191}, {"type": "nauc_ndcg_at_100_diff1", "value": 17.947555297079322}, {"type": "nauc_ndcg_at_100_max", "value": 11.952176273790133}, {"type": "nauc_ndcg_at_100_std", "value": -6.732908920357083}, {"type": "nauc_ndcg_at_10_diff1", "value": 18.88944240845781}, {"type": "nauc_ndcg_at_10_max", "value": 10.931301252399257}, {"type": "nauc_ndcg_at_10_std", "value": -9.501435512141649}, {"type": "nauc_ndcg_at_1_diff1", "value": 25.788068994261927}, {"type": "nauc_ndcg_at_1_max", "value": 12.494106068654443}, {"type": "nauc_ndcg_at_1_std", "value": -16.072022142157422}, {"type": "nauc_ndcg_at_20_diff1", "value": 18.596170230193344}, {"type": "nauc_ndcg_at_20_max", "value": 11.240653699992258}, {"type": "nauc_ndcg_at_20_std", "value": -8.248089644433646}, {"type": "nauc_ndcg_at_3_diff1", "value": 19.899071290487075}, {"type": "nauc_ndcg_at_3_max", "value": 10.217579017596986}, {"type": "nauc_ndcg_at_3_std", "value": -13.092631082234583}, {"type": "nauc_ndcg_at_5_diff1", "value": 19.36942104398564}, {"type": "nauc_ndcg_at_5_max", "value": 10.43000193675244}, {"type": "nauc_ndcg_at_5_std", "value": -10.83023984824733}, {"type": "nauc_precision_at_1000_diff1", "value": 3.524222591189092}, {"type": "nauc_precision_at_1000_max", "value": 21.268005942647154}, {"type": "nauc_precision_at_1000_std", "value": 15.036228494768125}, {"type": "nauc_precision_at_100_diff1", "value": 9.81714899740422}, {"type": "nauc_precision_at_100_max", "value": 16.79030493724481}, {"type": "nauc_precision_at_100_std", "value": 8.132992070925313}, {"type": "nauc_precision_at_10_diff1", "value": 15.127575113065081}, {"type": "nauc_precision_at_10_max", "value": 11.83424711782065}, {"type": "nauc_precision_at_10_std", "value": -4.12398539713339}, {"type": "nauc_precision_at_1_diff1", "value": 25.788068994261927}, {"type": "nauc_precision_at_1_max", "value": 12.494106068654443}, {"type": "nauc_precision_at_1_std", "value": -16.072022142157422}, {"type": "nauc_precision_at_20_diff1", "value": 13.988365041285991}, {"type": "nauc_precision_at_20_max", "value": 12.982343769260144}, {"type": "nauc_precision_at_20_std", "value": 0.12831196857307875}, {"type": "nauc_precision_at_3_diff1", "value": 16.98591248173311}, {"type": "nauc_precision_at_3_max", "value": 10.076477872033717}, {"type": "nauc_precision_at_3_std", "value": -11.763027829441572}, {"type": "nauc_precision_at_5_diff1", "value": 16.109103361887634}, {"type": "nauc_precision_at_5_max", "value": 10.743629779747735}, {"type": "nauc_precision_at_5_std", "value": -7.223871485711275}, {"type": "nauc_recall_at_1000_diff1", "value": 7.300447723499678}, {"type": "nauc_recall_at_1000_max", "value": 21.050009113075134}, {"type": "nauc_recall_at_1000_std", "value": 14.78834446079826}, {"type": "nauc_recall_at_100_diff1", "value": 10.585202094510606}, {"type": "nauc_recall_at_100_max", "value": 14.3397259367012}, {"type": "nauc_recall_at_100_std", "value": 6.774673938241939}, {"type": "nauc_recall_at_10_diff1", "value": 14.740253776747794}, {"type": "nauc_recall_at_10_max", "value": 10.775882310785141}, {"type": "nauc_recall_at_10_std", "value": -4.212933280572477}, {"type": "nauc_recall_at_1_diff1", "value": 25.93334856369996}, {"type": 
"nauc_recall_at_1_max", "value": 11.756956805295456}, {"type": "nauc_recall_at_1_std", "value": -15.812253616827613}, {"type": "nauc_recall_at_20_diff1", "value": 13.917159385985588}, {"type": "nauc_recall_at_20_max", "value": 11.562519738362539}, {"type": "nauc_recall_at_20_std", "value": -0.6257023100650639}, {"type": "nauc_recall_at_3_diff1", "value": 16.79817894741575}, {"type": "nauc_recall_at_3_max", "value": 9.28528047744461}, {"type": "nauc_recall_at_3_std", "value": -11.417062993569289}, {"type": "nauc_recall_at_5_diff1", "value": 15.946724754389002}, {"type": "nauc_recall_at_5_max", "value": 9.701570943463285}, {"type": "nauc_recall_at_5_std", "value": -7.10641716237399}, {"type": "ndcg_at_1", "value": 12.570999999999998}, {"type": "ndcg_at_10", "value": 21.749}, {"type": "ndcg_at_100", "value": 26.627000000000002}, {"type": "ndcg_at_1000", "value": 29.211}, {"type": "ndcg_at_20", "value": 23.546}, {"type": "ndcg_at_3", "value": 16.938}, {"type": "ndcg_at_5", "value": 19.259}, {"type": "precision_at_1", "value": 12.570999999999998}, {"type": "precision_at_10", "value": 3.5970000000000004}, {"type": "precision_at_100", "value": 0.621}, {"type": "precision_at_1000", "value": 0.086}, {"type": "precision_at_20", "value": 2.183}, {"type": "precision_at_3", "value": 7.2059999999999995}, {"type": "precision_at_5", "value": 5.536}, {"type": "recall_at_1", "value": 11.853}, {"type": "recall_at_10", "value": 33.376}, {"type": "recall_at_100", "value": 56.714}, {"type": "recall_at_1000", "value": 77.03}, {"type": "recall_at_20", "value": 40.327}, {"type": "recall_at_3", "value": 20.26}, {"type": "recall_at_5", "value": 25.816}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018 (default)", "type": "mteb/fiqa", "config": "default", "split": "test", "revision": "27a168819829fe9bcd655c2df245fb19452e8e06"}, "metrics": [{"type": "main_score", "value": 9.92}, {"type": "map_at_1", "value": 4.127}, {"type": "map_at_10", "value": 6.8580000000000005}, {"type": "map_at_100", "value": 7.678}, {"type": "map_at_1000", "value": 7.8469999999999995}, {"type": "map_at_20", "value": 7.2459999999999996}, {"type": "map_at_3", "value": 5.695}, {"type": "map_at_5", "value": 6.321000000000001}, {"type": "mrr_at_1", "value": 8.487654320987655}, {"type": "mrr_at_10", "value": 13.07460072506369}, {"type": "mrr_at_100", "value": 13.994745653960623}, {"type": "mrr_at_1000", "value": 14.107792823690083}, {"type": "mrr_at_20", "value": 13.534501788196179}, {"type": "mrr_at_3", "value": 11.445473251028803}, {"type": "mrr_at_5", "value": 12.3559670781893}, {"type": "nauc_map_at_1000_diff1", "value": 32.0284999038968}, {"type": "nauc_map_at_1000_max", "value": 1.5433417591774994}, {"type": "nauc_map_at_1000_std", "value": -0.7522549236643168}, {"type": "nauc_map_at_100_diff1", "value": 32.10293650409455}, {"type": "nauc_map_at_100_max", "value": 1.331765813078503}, {"type": "nauc_map_at_100_std", "value": -0.9813834028863421}, {"type": "nauc_map_at_10_diff1", "value": 32.996281892439825}, {"type": "nauc_map_at_10_max", "value": 0.9000809223325343}, {"type": "nauc_map_at_10_std", "value": -1.1346437895544166}, {"type": "nauc_map_at_1_diff1", "value": 40.86108038715362}, {"type": "nauc_map_at_1_max", "value": 2.3646340186850976}, {"type": "nauc_map_at_1_std", "value": -1.1273734737305066}, {"type": "nauc_map_at_20_diff1", "value": 32.74666906672611}, {"type": "nauc_map_at_20_max", "value": 1.2905542892955657}, {"type": "nauc_map_at_20_std", "value": -0.9339025080151999}, {"type": "nauc_map_at_3_diff1", 
"value": 35.22245724674683}, {"type": "nauc_map_at_3_max", "value": 0.7682718438437706}, {"type": "nauc_map_at_3_std", "value": 0.12863043400502505}, {"type": "nauc_map_at_5_diff1", "value": 33.82974605887253}, {"type": "nauc_map_at_5_max", "value": 1.9127548750254273}, {"type": "nauc_map_at_5_std", "value": -1.0892042440032836}, {"type": "nauc_mrr_at_1000_diff1", "value": 26.492008408086686}, {"type": "nauc_mrr_at_1000_max", "value": 5.1988605475320995}, {"type": "nauc_mrr_at_1000_std", "value": -5.000717562564267}, {"type": "nauc_mrr_at_100_diff1", "value": 26.43042358484738}, {"type": "nauc_mrr_at_100_max", "value": 5.105015607758134}, {"type": "nauc_mrr_at_100_std", "value": -5.087762897442909}, {"type": "nauc_mrr_at_10_diff1", "value": 26.788604447191133}, {"type": "nauc_mrr_at_10_max", "value": 4.7186331678651845}, {"type": "nauc_mrr_at_10_std", "value": -5.004992425060064}, {"type": "nauc_mrr_at_1_diff1", "value": 32.279840763275516}, {"type": "nauc_mrr_at_1_max", "value": 2.24128577826757}, {"type": "nauc_mrr_at_1_std", "value": -7.11209805130024}, {"type": "nauc_mrr_at_20_diff1", "value": 26.648740524800157}, {"type": "nauc_mrr_at_20_max", "value": 5.032938733920583}, {"type": "nauc_mrr_at_20_std", "value": -4.909302508802945}, {"type": "nauc_mrr_at_3_diff1", "value": 29.41800019774434}, {"type": "nauc_mrr_at_3_max", "value": 4.4590853953847525}, {"type": "nauc_mrr_at_3_std", "value": -4.3297909365345735}, {"type": "nauc_mrr_at_5_diff1", "value": 27.962472533762323}, {"type": "nauc_mrr_at_5_max", "value": 5.263438068962538}, {"type": "nauc_mrr_at_5_std", "value": -4.758962874067143}, {"type": "nauc_ndcg_at_1000_diff1", "value": 24.911203582060345}, {"type": "nauc_ndcg_at_1000_max", "value": 4.8332507815090455}, {"type": "nauc_ndcg_at_1000_std", "value": 1.6141523218130944}, {"type": "nauc_ndcg_at_100_diff1", "value": 24.983661152779078}, {"type": "nauc_ndcg_at_100_max", "value": 2.304457345177104}, {"type": "nauc_ndcg_at_100_std", "value": -1.5897525359169224}, {"type": "nauc_ndcg_at_10_diff1", "value": 28.26656252033789}, {"type": "nauc_ndcg_at_10_max", "value": 1.7020081362468151}, {"type": "nauc_ndcg_at_10_std", "value": -1.8666662654279278}, {"type": "nauc_ndcg_at_1_diff1", "value": 32.279840763275516}, {"type": "nauc_ndcg_at_1_max", "value": 2.24128577826757}, {"type": "nauc_ndcg_at_1_std", "value": -7.11209805130024}, {"type": "nauc_ndcg_at_20_diff1", "value": 27.465206920750536}, {"type": "nauc_ndcg_at_20_max", "value": 2.5953555722799453}, {"type": "nauc_ndcg_at_20_std", "value": -1.5728415410381176}, {"type": "nauc_ndcg_at_3_diff1", "value": 30.920667289434967}, {"type": "nauc_ndcg_at_3_max", "value": 3.0636991383196537}, {"type": "nauc_ndcg_at_3_std", "value": -1.9109940966007124}, {"type": "nauc_ndcg_at_5_diff1", "value": 29.92826036454942}, {"type": "nauc_ndcg_at_5_max", "value": 4.131081055128095}, {"type": "nauc_ndcg_at_5_std", "value": -2.3878918992446225}, {"type": "nauc_precision_at_1000_diff1", "value": 3.260322987641696}, {"type": "nauc_precision_at_1000_max", "value": 17.68897292294318}, {"type": "nauc_precision_at_1000_std", "value": -2.3731970963497435}, {"type": "nauc_precision_at_100_diff1", "value": 9.563869576672285}, {"type": "nauc_precision_at_100_max", "value": 8.334908942965033}, {"type": "nauc_precision_at_100_std", "value": -5.8502185819543095}, {"type": "nauc_precision_at_10_diff1", "value": 19.4489082625378}, {"type": "nauc_precision_at_10_max", "value": 3.283292230263419}, {"type": "nauc_precision_at_10_std", "value": -3.474955077429711}, 
{"type": "nauc_precision_at_1_diff1", "value": 32.279840763275516}, {"type": "nauc_precision_at_1_max", "value": 2.24128577826757}, {"type": "nauc_precision_at_1_std", "value": -7.11209805130024}, {"type": "nauc_precision_at_20_diff1", "value": 16.689938201739743}, {"type": "nauc_precision_at_20_max", "value": 6.725444203867719}, {"type": "nauc_precision_at_20_std", "value": -4.064726266450374}, {"type": "nauc_precision_at_3_diff1", "value": 25.13225837931828}, {"type": "nauc_precision_at_3_max", "value": 4.838860499225599}, {"type": "nauc_precision_at_3_std", "value": -3.958929737721354}, {"type": "nauc_precision_at_5_diff1", "value": 24.021979813061318}, {"type": "nauc_precision_at_5_max", "value": 7.890864147142139}, {"type": "nauc_precision_at_5_std", "value": -5.108473581125845}, {"type": "nauc_recall_at_1000_diff1", "value": 11.754438596675685}, {"type": "nauc_recall_at_1000_max", "value": 2.6490978066853614}, {"type": "nauc_recall_at_1000_std", "value": 16.01878535704267}, {"type": "nauc_recall_at_100_diff1", "value": 13.38637240649497}, {"type": "nauc_recall_at_100_max", "value": -1.221302040775315}, {"type": "nauc_recall_at_100_std", "value": 1.157256497357066}, {"type": "nauc_recall_at_10_diff1", "value": 21.794818475196234}, {"type": "nauc_recall_at_10_max", "value": -0.3633267676365134}, {"type": "nauc_recall_at_10_std", "value": -0.895901919914364}, {"type": "nauc_recall_at_1_diff1", "value": 40.86108038715362}, {"type": "nauc_recall_at_1_max", "value": 2.3646340186850976}, {"type": "nauc_recall_at_1_std", "value": -1.1273734737305066}, {"type": "nauc_recall_at_20_diff1", "value": 19.87681298491174}, {"type": "nauc_recall_at_20_max", "value": 1.6730017285596162}, {"type": "nauc_recall_at_20_std", "value": -0.20426631986163188}, {"type": "nauc_recall_at_3_diff1", "value": 30.874288136679436}, {"type": "nauc_recall_at_3_max", "value": -0.3136634079590933}, {"type": "nauc_recall_at_3_std", "value": 2.5177179498883}, {"type": "nauc_recall_at_5_diff1", "value": 25.256571251371817}, {"type": "nauc_recall_at_5_max", "value": 3.682723691316816}, {"type": "nauc_recall_at_5_std", "value": -0.5339704756198042}, {"type": "ndcg_at_1", "value": 8.488}, {"type": "ndcg_at_10", "value": 9.92}, {"type": "ndcg_at_100", "value": 14.548}, {"type": "ndcg_at_1000", "value": 18.9}, {"type": "ndcg_at_20", "value": 11.359}, {"type": "ndcg_at_3", "value": 8.024000000000001}, {"type": "ndcg_at_5", "value": 8.792}, {"type": "precision_at_1", "value": 8.488}, {"type": "precision_at_10", "value": 2.932}, {"type": "precision_at_100", "value": 0.748}, {"type": "precision_at_1000", "value": 0.148}, {"type": "precision_at_20", "value": 2.0140000000000002}, {"type": "precision_at_3", "value": 5.556}, {"type": "precision_at_5", "value": 4.321}, {"type": "recall_at_1", "value": 4.127}, {"type": "recall_at_10", "value": 13.094}, {"type": "recall_at_100", "value": 31.837}, {"type": "recall_at_1000", "value": 59.553}, {"type": "recall_at_20", "value": 17.827}, {"type": "recall_at_3", "value": 7.384}, {"type": "recall_at_5", "value": 9.896}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA (default)", "type": "mteb/hotpotqa", "config": "default", "split": "test", "revision": "ab518f4d6fcca38d87c25209f94beba119d02014"}, "metrics": [{"type": "main_score", "value": 28.275}, {"type": "map_at_1", "value": 16.111}, {"type": "map_at_10", "value": 22.017}, {"type": "map_at_100", "value": 22.756999999999998}, {"type": "map_at_1000", "value": 22.847}, {"type": "map_at_20", "value": 22.422}, {"type": 
"map_at_3", "value": 20.358}, {"type": "map_at_5", "value": 21.333}, {"type": "mrr_at_1", "value": 32.22147197839298}, {"type": "mrr_at_10", "value": 38.66461421390523}, {"type": "mrr_at_100", "value": 39.322386407471846}, {"type": "mrr_at_1000", "value": 39.38317578333015}, {"type": "mrr_at_20", "value": 39.03936064723844}, {"type": "mrr_at_3", "value": 37.0380373621427}, {"type": "mrr_at_5", "value": 37.98739590366868}, {"type": "nauc_map_at_1000_diff1", "value": 52.641429993405374}, {"type": "nauc_map_at_1000_max", "value": 13.846349541182768}, {"type": "nauc_map_at_1000_std", "value": 21.234286433255207}, {"type": "nauc_map_at_100_diff1", "value": 52.657002815638506}, {"type": "nauc_map_at_100_max", "value": 13.85017253047762}, {"type": "nauc_map_at_100_std", "value": 21.152031928089446}, {"type": "nauc_map_at_10_diff1", "value": 52.99229334884495}, {"type": "nauc_map_at_10_max", "value": 14.018498788641875}, {"type": "nauc_map_at_10_std", "value": 20.280967836300796}, {"type": "nauc_map_at_1_diff1", "value": 62.48492577674589}, {"type": "nauc_map_at_1_max", "value": 17.17952567126258}, {"type": "nauc_map_at_1_std", "value": 14.948761034885164}, {"type": "nauc_map_at_20_diff1", "value": 52.79863501218778}, {"type": "nauc_map_at_20_max", "value": 13.948043219195666}, {"type": "nauc_map_at_20_std", "value": 20.7595845629364}, {"type": "nauc_map_at_3_diff1", "value": 54.858284695883874}, {"type": "nauc_map_at_3_max", "value": 15.306243909685097}, {"type": "nauc_map_at_3_std", "value": 18.364146093661798}, {"type": "nauc_map_at_5_diff1", "value": 53.64685588504633}, {"type": "nauc_map_at_5_max", "value": 14.539476850625293}, {"type": "nauc_map_at_5_std", "value": 19.26181960117483}, {"type": "nauc_mrr_at_1000_diff1", "value": 57.57231957804255}, {"type": "nauc_mrr_at_1000_max", "value": 15.03366896314471}, {"type": "nauc_mrr_at_1000_std", "value": 18.433684270599176}, {"type": "nauc_mrr_at_100_diff1", "value": 57.56183438457194}, {"type": "nauc_mrr_at_100_max", "value": 15.03096028096824}, {"type": "nauc_mrr_at_100_std", "value": 18.429416889726777}, {"type": "nauc_mrr_at_10_diff1", "value": 57.67734377743546}, {"type": "nauc_mrr_at_10_max", "value": 15.16017920205799}, {"type": "nauc_mrr_at_10_std", "value": 18.12061393467236}, {"type": "nauc_mrr_at_1_diff1", "value": 62.48492577674589}, {"type": "nauc_mrr_at_1_max", "value": 17.17952567126258}, {"type": "nauc_mrr_at_1_std", "value": 14.948761034885164}, {"type": "nauc_mrr_at_20_diff1", "value": 57.60348567562974}, {"type": "nauc_mrr_at_20_max", "value": 15.076107860913815}, {"type": "nauc_mrr_at_20_std", "value": 18.315578904649655}, {"type": "nauc_mrr_at_3_diff1", "value": 58.506133301922404}, {"type": "nauc_mrr_at_3_max", "value": 15.915584728445186}, {"type": "nauc_mrr_at_3_std", "value": 17.04808056180522}, {"type": "nauc_mrr_at_5_diff1", "value": 57.864519138851}, {"type": "nauc_mrr_at_5_max", "value": 15.432048897499834}, {"type": "nauc_mrr_at_5_std", "value": 17.501503102699093}, {"type": "nauc_ndcg_at_1000_diff1", "value": 50.81874302391767}, {"type": "nauc_ndcg_at_1000_max", "value": 12.126965970827337}, {"type": "nauc_ndcg_at_1000_std", "value": 26.109477652734558}, {"type": "nauc_ndcg_at_100_diff1", "value": 50.95009805524029}, {"type": "nauc_ndcg_at_100_max", "value": 12.295872662993116}, {"type": "nauc_ndcg_at_100_std", "value": 24.807604340476804}, {"type": "nauc_ndcg_at_10_diff1", "value": 52.20877593945092}, {"type": "nauc_ndcg_at_10_max", "value": 13.097936478311336}, {"type": "nauc_ndcg_at_10_std", "value": 
21.647729284253273}, {"type": "nauc_ndcg_at_1_diff1", "value": 62.48492577674589}, {"type": "nauc_ndcg_at_1_max", "value": 17.17952567126258}, {"type": "nauc_ndcg_at_1_std", "value": 14.948761034885164}, {"type": "nauc_ndcg_at_20_diff1", "value": 51.660197131546795}, {"type": "nauc_ndcg_at_20_max", "value": 12.806424408705414}, {"type": "nauc_ndcg_at_20_std", "value": 22.845498945756106}, {"type": "nauc_ndcg_at_3_diff1", "value": 54.93829038994602}, {"type": "nauc_ndcg_at_3_max", "value": 15.126023161114087}, {"type": "nauc_ndcg_at_3_std", "value": 18.550528733148234}, {"type": "nauc_ndcg_at_5_diff1", "value": 53.22828386576709}, {"type": "nauc_ndcg_at_5_max", "value": 14.010347066037058}, {"type": "nauc_ndcg_at_5_std", "value": 19.741810905430523}, {"type": "nauc_precision_at_1000_diff1", "value": 25.685915909789987}, {"type": "nauc_precision_at_1000_max", "value": 1.8017828825425253}, {"type": "nauc_precision_at_1000_std", "value": 41.162880151457074}, {"type": "nauc_precision_at_100_diff1", "value": 32.092241320736}, {"type": "nauc_precision_at_100_max", "value": 4.604946834474919}, {"type": "nauc_precision_at_100_std", "value": 34.4563884520215}, {"type": "nauc_precision_at_10_diff1", "value": 41.65435929038311}, {"type": "nauc_precision_at_10_max", "value": 8.565743855294501}, {"type": "nauc_precision_at_10_std", "value": 26.21588053936351}, {"type": "nauc_precision_at_1_diff1", "value": 62.48492577674589}, {"type": "nauc_precision_at_1_max", "value": 17.17952567126258}, {"type": "nauc_precision_at_1_std", "value": 14.948761034885164}, {"type": "nauc_precision_at_20_diff1", "value": 38.94463410875179}, {"type": "nauc_precision_at_20_max", "value": 7.463676781280664}, {"type": "nauc_precision_at_20_std", "value": 29.137351869373944}, {"type": "nauc_precision_at_3_diff1", "value": 50.167835645184425}, {"type": "nauc_precision_at_3_max", "value": 13.751023116677993}, {"type": "nauc_precision_at_3_std", "value": 20.36965523817541}, {"type": "nauc_precision_at_5_diff1", "value": 45.636896593629885}, {"type": "nauc_precision_at_5_max", "value": 11.146676622303696}, {"type": "nauc_precision_at_5_std", "value": 22.338180446057095}, {"type": "nauc_recall_at_1000_diff1", "value": 25.68591590979012}, {"type": "nauc_recall_at_1000_max", "value": 1.801782882542605}, {"type": "nauc_recall_at_1000_std", "value": 41.162880151457124}, {"type": "nauc_recall_at_100_diff1", "value": 32.09224132073595}, {"type": "nauc_recall_at_100_max", "value": 4.604946834474883}, {"type": "nauc_recall_at_100_std", "value": 34.45638845202142}, {"type": "nauc_recall_at_10_diff1", "value": 41.65435929038313}, {"type": "nauc_recall_at_10_max", "value": 8.56574385529452}, {"type": "nauc_recall_at_10_std", "value": 26.215880539363507}, {"type": "nauc_recall_at_1_diff1", "value": 62.48492577674589}, {"type": "nauc_recall_at_1_max", "value": 17.17952567126258}, {"type": "nauc_recall_at_1_std", "value": 14.948761034885164}, {"type": "nauc_recall_at_20_diff1", "value": 38.94463410875175}, {"type": "nauc_recall_at_20_max", "value": 7.463676781280684}, {"type": "nauc_recall_at_20_std", "value": 29.13735186937395}, {"type": "nauc_recall_at_3_diff1", "value": 50.1678356451844}, {"type": "nauc_recall_at_3_max", "value": 13.751023116677974}, {"type": "nauc_recall_at_3_std", "value": 20.369655238175365}, {"type": "nauc_recall_at_5_diff1", "value": 45.63689659362988}, {"type": "nauc_recall_at_5_max", "value": 11.146676622303726}, {"type": "nauc_recall_at_5_std", "value": 22.33818044605712}, {"type": "ndcg_at_1", "value": 32.221}, 
{"type": "ndcg_at_10", "value": 28.275}, {"type": "ndcg_at_100", "value": 31.785000000000004}, {"type": "ndcg_at_1000", "value": 34.103}, {"type": "ndcg_at_20", "value": 29.593000000000004}, {"type": "ndcg_at_3", "value": 25.151}, {"type": "ndcg_at_5", "value": 26.752}, {"type": "precision_at_1", "value": 32.221}, {"type": "precision_at_10", "value": 6.1240000000000006}, {"type": "precision_at_100", "value": 0.893}, {"type": "precision_at_1000", "value": 0.12}, {"type": "precision_at_20", "value": 3.486}, {"type": "precision_at_3", "value": 15.737000000000002}, {"type": "precision_at_5", "value": 10.709}, {"type": "recall_at_1", "value": 16.111}, {"type": "recall_at_10", "value": 30.621}, {"type": "recall_at_100", "value": 44.625}, {"type": "recall_at_1000", "value": 60.141999999999996}, {"type": "recall_at_20", "value": 34.862}, {"type": "recall_at_3", "value": 23.605999999999998}, {"type": "recall_at_5", "value": 26.772000000000002}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification (default)", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 64.7572}, {"type": "ap", "value": 59.839874895528524}, {"type": "ap_weighted", "value": 59.839874895528524}, {"type": "f1", "value": 64.20337541365726}, {"type": "f1_weighted", "value": 64.20337541365727}, {"type": "main_score", "value": 64.7572}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO (default)", "type": "mteb/msmarco", "config": "default", "split": "test", "revision": "c5a29a104738b98a9e76336939199e264163d4a0"}, "metrics": [{"type": "main_score", "value": 24.82}, {"type": "map_at_1", "value": 0.735}, {"type": "map_at_10", "value": 3.9170000000000003}, {"type": "map_at_100", "value": 9.378}, {"type": "map_at_1000", "value": 11.623999999999999}, {"type": "map_at_20", "value": 5.618}, {"type": "map_at_3", "value": 1.7919999999999998}, {"type": "map_at_5", "value": 2.336}, {"type": "mrr_at_1", "value": 44.18604651162791}, {"type": "mrr_at_10", "value": 54.19896640826873}, {"type": "mrr_at_100", "value": 55.26573324528463}, {"type": "mrr_at_1000", "value": 55.27198566559285}, {"type": "mrr_at_20", "value": 54.99928247327269}, {"type": "mrr_at_3", "value": 53.100775193798455}, {"type": "mrr_at_5", "value": 53.100775193798455}, {"type": "nauc_map_at_1000_diff1", "value": 31.628273118548705}, {"type": "nauc_map_at_1000_max", "value": 50.91371464691997}, {"type": "nauc_map_at_1000_std", "value": 62.1629306739638}, {"type": "nauc_map_at_100_diff1", "value": 34.06331996720226}, {"type": "nauc_map_at_100_max", "value": 48.04779853755765}, {"type": "nauc_map_at_100_std", "value": 56.2169602632146}, {"type": "nauc_map_at_10_diff1", "value": 24.82479697995596}, {"type": "nauc_map_at_10_max", "value": 17.260532120473027}, {"type": "nauc_map_at_10_std", "value": 34.40317364487004}, {"type": "nauc_map_at_1_diff1", "value": 18.417203305727828}, {"type": "nauc_map_at_1_max", "value": 10.348745827965553}, {"type": "nauc_map_at_1_std", "value": 13.800647316428785}, {"type": "nauc_map_at_20_diff1", "value": 28.607666223966184}, {"type": "nauc_map_at_20_max", "value": 26.857097064842744}, {"type": "nauc_map_at_20_std", "value": 44.07803604009219}, {"type": "nauc_map_at_3_diff1", "value": 14.047344730269346}, {"type": "nauc_map_at_3_max", "value": 4.963953509469209}, {"type": "nauc_map_at_3_std", "value": 23.557463504489785}, {"type": "nauc_map_at_5_diff1", "value": 21.509241434242192}, {"type": 
"nauc_map_at_5_max", "value": 12.46882534029133}, {"type": "nauc_map_at_5_std", "value": 32.227877810916375}, {"type": "nauc_mrr_at_1000_diff1", "value": 31.63313657810774}, {"type": "nauc_mrr_at_1000_max", "value": 55.49699813296376}, {"type": "nauc_mrr_at_1000_std", "value": 49.41026226392305}, {"type": "nauc_mrr_at_100_diff1", "value": 31.6361977657553}, {"type": "nauc_mrr_at_100_max", "value": 55.504705533419596}, {"type": "nauc_mrr_at_100_std", "value": 49.40562252181147}, {"type": "nauc_mrr_at_10_diff1", "value": 30.70281063739253}, {"type": "nauc_mrr_at_10_max", "value": 55.03100675112251}, {"type": "nauc_mrr_at_10_std", "value": 50.24358852371792}, {"type": "nauc_mrr_at_1_diff1", "value": 26.866938946939406}, {"type": "nauc_mrr_at_1_max", "value": 53.65404374099094}, {"type": "nauc_mrr_at_1_std", "value": 37.860934759045406}, {"type": "nauc_mrr_at_20_diff1", "value": 31.999742146159587}, {"type": "nauc_mrr_at_20_max", "value": 55.37549959049349}, {"type": "nauc_mrr_at_20_std", "value": 49.84014367474812}, {"type": "nauc_mrr_at_3_diff1", "value": 32.72165006933737}, {"type": "nauc_mrr_at_3_max", "value": 54.57910637425508}, {"type": "nauc_mrr_at_3_std", "value": 50.88385330631171}, {"type": "nauc_mrr_at_5_diff1", "value": 32.72165006933737}, {"type": "nauc_mrr_at_5_max", "value": 54.57910637425508}, {"type": "nauc_mrr_at_5_std", "value": 50.88385330631171}, {"type": "nauc_ndcg_at_1000_diff1", "value": 38.246667176580495}, {"type": "nauc_ndcg_at_1000_max", "value": 49.41074648270727}, {"type": "nauc_ndcg_at_1000_std", "value": 58.77522494287387}, {"type": "nauc_ndcg_at_100_diff1", "value": 39.08660687104264}, {"type": "nauc_ndcg_at_100_max", "value": 51.17365801344417}, {"type": "nauc_ndcg_at_100_std", "value": 50.96489743248102}, {"type": "nauc_ndcg_at_10_diff1", "value": 35.52797859138293}, {"type": "nauc_ndcg_at_10_max", "value": 47.13047918089127}, {"type": "nauc_ndcg_at_10_std", "value": 47.525674912522156}, {"type": "nauc_ndcg_at_1_diff1", "value": 20.578863285213718}, {"type": "nauc_ndcg_at_1_max", "value": 33.573506875453205}, {"type": "nauc_ndcg_at_1_std", "value": 11.414153977938234}, {"type": "nauc_ndcg_at_20_diff1", "value": 36.05409218821747}, {"type": "nauc_ndcg_at_20_max", "value": 51.40798496195552}, {"type": "nauc_ndcg_at_20_std", "value": 50.81256309557642}, {"type": "nauc_ndcg_at_3_diff1", "value": 30.26224700714665}, {"type": "nauc_ndcg_at_3_max", "value": 38.639459899469855}, {"type": "nauc_ndcg_at_3_std", "value": 36.35415154738677}, {"type": "nauc_ndcg_at_5_diff1", "value": 36.43564587113643}, {"type": "nauc_ndcg_at_5_max", "value": 46.3557986365278}, {"type": "nauc_ndcg_at_5_std", "value": 43.88461405861497}, {"type": "nauc_precision_at_1000_diff1", "value": 19.248285775071935}, {"type": "nauc_precision_at_1000_max", "value": 54.75027201666528}, {"type": "nauc_precision_at_1000_std", "value": 57.85442302597637}, {"type": "nauc_precision_at_100_diff1", "value": 29.756268297368276}, {"type": "nauc_precision_at_100_max", "value": 64.30489557431851}, {"type": "nauc_precision_at_100_std", "value": 58.606646614493904}, {"type": "nauc_precision_at_10_diff1", "value": 34.10288051634421}, {"type": "nauc_precision_at_10_max", "value": 52.34153820407179}, {"type": "nauc_precision_at_10_std", "value": 56.999928724425644}, {"type": "nauc_precision_at_1_diff1", "value": 26.866938946939406}, {"type": "nauc_precision_at_1_max", "value": 53.65404374099094}, {"type": "nauc_precision_at_1_std", "value": 37.860934759045406}, {"type": "nauc_precision_at_20_diff1", "value": 
33.79921393600524}, {"type": "nauc_precision_at_20_max", "value": 56.236094445972796}, {"type": "nauc_precision_at_20_std", "value": 57.15552085215475}, {"type": "nauc_precision_at_3_diff1", "value": 26.035425537108857}, {"type": "nauc_precision_at_3_max", "value": 45.56408327261248}, {"type": "nauc_precision_at_3_std", "value": 59.56195436648325}, {"type": "nauc_precision_at_5_diff1", "value": 34.84378104012192}, {"type": "nauc_precision_at_5_max", "value": 49.30041620262202}, {"type": "nauc_precision_at_5_std", "value": 56.6683934979334}, {"type": "nauc_recall_at_1000_diff1", "value": 30.51575548576755}, {"type": "nauc_recall_at_1000_max", "value": 43.64934411599405}, {"type": "nauc_recall_at_1000_std", "value": 56.84154990793133}, {"type": "nauc_recall_at_100_diff1", "value": 39.6998643462103}, {"type": "nauc_recall_at_100_max", "value": 44.8373934135145}, {"type": "nauc_recall_at_100_std", "value": 49.873151485862614}, {"type": "nauc_recall_at_10_diff1", "value": 24.733893615746922}, {"type": "nauc_recall_at_10_max", "value": 17.48036291557653}, {"type": "nauc_recall_at_10_std", "value": 26.533730432814185}, {"type": "nauc_recall_at_1_diff1", "value": 18.417203305727828}, {"type": "nauc_recall_at_1_max", "value": 10.348745827965553}, {"type": "nauc_recall_at_1_std", "value": 13.800647316428785}, {"type": "nauc_recall_at_20_diff1", "value": 30.64841793571244}, {"type": "nauc_recall_at_20_max", "value": 25.399231149100032}, {"type": "nauc_recall_at_20_std", "value": 36.03516872677545}, {"type": "nauc_recall_at_3_diff1", "value": 14.184010517448723}, {"type": "nauc_recall_at_3_max", "value": 3.9055370774988845}, {"type": "nauc_recall_at_3_std", "value": 26.09707135236969}, {"type": "nauc_recall_at_5_diff1", "value": 25.775613267290566}, {"type": "nauc_recall_at_5_max", "value": 13.674868148818057}, {"type": "nauc_recall_at_5_std", "value": 34.391050366605185}, {"type": "ndcg_at_1", "value": 30.232999999999997}, {"type": "ndcg_at_10", "value": 24.82}, {"type": "ndcg_at_100", "value": 23.547}, {"type": "ndcg_at_1000", "value": 30.558000000000003}, {"type": "ndcg_at_20", "value": 24.204}, {"type": "ndcg_at_3", "value": 27.322000000000003}, {"type": "ndcg_at_5", "value": 25.058000000000003}, {"type": "precision_at_1", "value": 44.186}, {"type": "precision_at_10", "value": 32.791}, {"type": "precision_at_100", "value": 14.860000000000001}, {"type": "precision_at_1000", "value": 3.2840000000000003}, {"type": "precision_at_20", "value": 28.255999999999997}, {"type": "precision_at_3", "value": 40.31}, {"type": "precision_at_5", "value": 35.349000000000004}, {"type": "recall_at_1", "value": 0.735}, {"type": "recall_at_10", "value": 5.367}, {"type": "recall_at_100", "value": 19.198999999999998}, {"type": "recall_at_1000", "value": 39.997}, {"type": "recall_at_20", "value": 8.486}, {"type": "recall_at_3", "value": 2.092}, {"type": "recall_at_5", "value": 2.758}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 85.46739626082991}, {"type": "f1", "value": 84.68203526638132}, {"type": "f1_weighted", "value": 85.61284249538359}, {"type": "main_score", "value": 85.46739626082991}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, 
"metrics": [{"type": "accuracy", "value": 65.46511627906978}, {"type": "f1", "value": 47.640541375476545}, {"type": "f1_weighted", "value": 69.33504477285032}, {"type": "main_score", "value": 65.46511627906978}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 62.96570275722932}, {"type": "f1", "value": 61.06806674831273}, {"type": "f1_weighted", "value": 63.23826864499515}, {"type": "main_score", "value": 62.96570275722932}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 67.0611970410222}, {"type": "f1", "value": 65.86938657402365}, {"type": "f1_weighted", "value": 67.16694460005834}, {"type": "main_score", "value": 67.0611970410222}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P (default)", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "main_score", "value": 24.46702077642377}, {"type": "v_measure", "value": 24.46702077642377}, {"type": "v_measure_std", "value": 1.4535352745525076}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S (default)", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "main_score", "value": 19.382712347812014}, {"type": "v_measure", "value": 19.382712347812014}, {"type": "v_measure_std", "value": 1.5234944494227807}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking (default)", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "59042f120c80e8afa9cdbb224f67076cec0fc9a7"}, "metrics": [{"type": "main_score", "value": 27.029080895625512}, {"type": "map", "value": 27.029080895625512}, {"type": "mrr", "value": 27.331766237183647}, {"type": "nAUC_map_diff1", "value": 13.215659465363643}, {"type": "nAUC_map_max", "value": -31.94716011694344}, {"type": "nAUC_map_std", "value": -19.2078629337707}, {"type": "nAUC_mrr_diff1", "value": 12.88388012914082}, {"type": "nAUC_mrr_max", "value": -25.759798374458892}, {"type": "nAUC_mrr_std", "value": -15.737741045947908}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus (default)", "type": "mteb/nfcorpus", "config": "default", "split": "test", "revision": "ec0fa4fe99da2ff19ca1214b7966684033a58814"}, "metrics": [{"type": "main_score", "value": 20.195}, {"type": "map_at_1", "value": 2.945}, {"type": "map_at_10", "value": 6.221}, {"type": "map_at_100", "value": 7.890999999999999}, {"type": "map_at_1000", "value": 8.904}, {"type": "map_at_20", "value": 6.827}, {"type": "map_at_3", "value": 4.744000000000001}, {"type": "map_at_5", "value": 5.469}, {"type": "mrr_at_1", "value": 26.93498452012384}, {"type": "mrr_at_10", "value": 36.720723377070115}, {"type": "mrr_at_100", "value": 37.522499628403175}, {"type": "mrr_at_1000", "value": 37.59191125990685}, {"type": "mrr_at_20", "value": 37.15237674507148}, {"type": "mrr_at_3", "value": 34.52012383900927}, {"type": "mrr_at_5", "value": 35.89783281733745}, {"type": "nauc_map_at_1000_diff1", "value": 
27.79611013865157}, {"type": "nauc_map_at_1000_max", "value": 25.115487651379688}, {"type": "nauc_map_at_1000_std", "value": 22.447938705331307}, {"type": "nauc_map_at_100_diff1", "value": 29.07484696328452}, {"type": "nauc_map_at_100_max", "value": 25.550609852486843}, {"type": "nauc_map_at_100_std", "value": 18.748849434206015}, {"type": "nauc_map_at_10_diff1", "value": 32.13313134621841}, {"type": "nauc_map_at_10_max", "value": 23.42371259213}, {"type": "nauc_map_at_10_std", "value": 13.132452436446055}, {"type": "nauc_map_at_1_diff1", "value": 48.99604418795718}, {"type": "nauc_map_at_1_max", "value": 20.94493652063089}, {"type": "nauc_map_at_1_std", "value": 6.27622130781943}, {"type": "nauc_map_at_20_diff1", "value": 30.314627824479988}, {"type": "nauc_map_at_20_max", "value": 24.919313882172776}, {"type": "nauc_map_at_20_std", "value": 15.47713557515385}, {"type": "nauc_map_at_3_diff1", "value": 37.36695377200037}, {"type": "nauc_map_at_3_max", "value": 20.859250031555383}, {"type": "nauc_map_at_3_std", "value": 7.487974603438007}, {"type": "nauc_map_at_5_diff1", "value": 35.62996317877479}, {"type": "nauc_map_at_5_max", "value": 22.246030893552174}, {"type": "nauc_map_at_5_std", "value": 11.234461088832076}, {"type": "nauc_mrr_at_1000_diff1", "value": 27.787634466790905}, {"type": "nauc_mrr_at_1000_max", "value": 26.154081073396874}, {"type": "nauc_mrr_at_1000_std", "value": 21.49803908031959}, {"type": "nauc_mrr_at_100_diff1", "value": 27.775944068106096}, {"type": "nauc_mrr_at_100_max", "value": 26.18134621303553}, {"type": "nauc_mrr_at_100_std", "value": 21.49112111683465}, {"type": "nauc_mrr_at_10_diff1", "value": 27.66049246199066}, {"type": "nauc_mrr_at_10_max", "value": 25.953362613367513}, {"type": "nauc_mrr_at_10_std", "value": 21.9159895988671}, {"type": "nauc_mrr_at_1_diff1", "value": 30.047040224446768}, {"type": "nauc_mrr_at_1_max", "value": 23.814650508147956}, {"type": "nauc_mrr_at_1_std", "value": 13.975913248930718}, {"type": "nauc_mrr_at_20_diff1", "value": 27.738165039507905}, {"type": "nauc_mrr_at_20_max", "value": 26.175963126916358}, {"type": "nauc_mrr_at_20_std", "value": 21.6368886583229}, {"type": "nauc_mrr_at_3_diff1", "value": 28.920621268944362}, {"type": "nauc_mrr_at_3_max", "value": 25.14792614833204}, {"type": "nauc_mrr_at_3_std", "value": 21.63716383788851}, {"type": "nauc_mrr_at_5_diff1", "value": 28.29638825898445}, {"type": "nauc_mrr_at_5_max", "value": 25.207905032193434}, {"type": "nauc_mrr_at_5_std", "value": 21.620001994525204}, {"type": "nauc_ndcg_at_1000_diff1", "value": 23.676678242133264}, {"type": "nauc_ndcg_at_1000_max", "value": 29.40819281328086}, {"type": "nauc_ndcg_at_1000_std", "value": 27.48922266163637}, {"type": "nauc_ndcg_at_100_diff1", "value": 24.068151236413946}, {"type": "nauc_ndcg_at_100_max", "value": 26.195824476280627}, {"type": "nauc_ndcg_at_100_std", "value": 26.21807375892809}, {"type": "nauc_ndcg_at_10_diff1", "value": 21.36507362084362}, {"type": "nauc_ndcg_at_10_max", "value": 21.88154065329857}, {"type": "nauc_ndcg_at_10_std", "value": 30.590021666432776}, {"type": "nauc_ndcg_at_1_diff1", "value": 29.5481445398632}, {"type": "nauc_ndcg_at_1_max", "value": 21.28363101652307}, {"type": "nauc_ndcg_at_1_std", "value": 16.267359871177767}, {"type": "nauc_ndcg_at_20_diff1", "value": 22.786374147311257}, {"type": "nauc_ndcg_at_20_max", "value": 23.71430035323994}, {"type": "nauc_ndcg_at_20_std", "value": 30.948437670908152}, {"type": "nauc_ndcg_at_3_diff1", "value": 22.73384684789295}, {"type": "nauc_ndcg_at_3_max", 
"value": 23.884749210882312}, {"type": "nauc_ndcg_at_3_std", "value": 27.914342072137188}, {"type": "nauc_ndcg_at_5_diff1", "value": 20.747332983786713}, {"type": "nauc_ndcg_at_5_max", "value": 21.92441825265579}, {"type": "nauc_ndcg_at_5_std", "value": 29.75514293433641}, {"type": "nauc_precision_at_1000_diff1", "value": -1.65536586785613}, {"type": "nauc_precision_at_1000_max", "value": -1.3001979301423146}, {"type": "nauc_precision_at_1000_std", "value": 43.228651563159566}, {"type": "nauc_precision_at_100_diff1", "value": 3.1345908963206797}, {"type": "nauc_precision_at_100_max", "value": 7.571791761705496}, {"type": "nauc_precision_at_100_std", "value": 44.15229657763602}, {"type": "nauc_precision_at_10_diff1", "value": 7.683752240473546}, {"type": "nauc_precision_at_10_max", "value": 19.5029803917141}, {"type": "nauc_precision_at_10_std", "value": 38.62282334783876}, {"type": "nauc_precision_at_1_diff1", "value": 30.047040224446768}, {"type": "nauc_precision_at_1_max", "value": 23.814650508147956}, {"type": "nauc_precision_at_1_std", "value": 13.975913248930718}, {"type": "nauc_precision_at_20_diff1", "value": 6.930295774089716}, {"type": "nauc_precision_at_20_max", "value": 18.751959496812546}, {"type": "nauc_precision_at_20_std", "value": 43.43876310805847}, {"type": "nauc_precision_at_3_diff1", "value": 15.645692055073493}, {"type": "nauc_precision_at_3_max", "value": 27.07516284809194}, {"type": "nauc_precision_at_3_std", "value": 32.791468901313635}, {"type": "nauc_precision_at_5_diff1", "value": 9.551833738631395}, {"type": "nauc_precision_at_5_max", "value": 22.195462858158265}, {"type": "nauc_precision_at_5_std", "value": 35.86235073052298}, {"type": "nauc_recall_at_1000_diff1", "value": 17.10758113070841}, {"type": "nauc_recall_at_1000_max", "value": 14.409721865645015}, {"type": "nauc_recall_at_1000_std", "value": 7.175910246747222}, {"type": "nauc_recall_at_100_diff1", "value": 16.98022992881172}, {"type": "nauc_recall_at_100_max", "value": 15.144596632597517}, {"type": "nauc_recall_at_100_std", "value": 5.807717340611582}, {"type": "nauc_recall_at_10_diff1", "value": 23.064192726886542}, {"type": "nauc_recall_at_10_max", "value": 16.546409463109317}, {"type": "nauc_recall_at_10_std", "value": 10.660303125291867}, {"type": "nauc_recall_at_1_diff1", "value": 48.99604418795718}, {"type": "nauc_recall_at_1_max", "value": 20.94493652063089}, {"type": "nauc_recall_at_1_std", "value": 6.27622130781943}, {"type": "nauc_recall_at_20_diff1", "value": 19.152047964875734}, {"type": "nauc_recall_at_20_max", "value": 20.717660137504122}, {"type": "nauc_recall_at_20_std", "value": 10.52999190542913}, {"type": "nauc_recall_at_3_diff1", "value": 28.737014872630745}, {"type": "nauc_recall_at_3_max", "value": 13.65850690480607}, {"type": "nauc_recall_at_3_std", "value": 5.427736667755079}, {"type": "nauc_recall_at_5_diff1", "value": 26.4019394342239}, {"type": "nauc_recall_at_5_max", "value": 15.134299251730008}, {"type": "nauc_recall_at_5_std", "value": 9.576300445195523}, {"type": "ndcg_at_1", "value": 25.386999999999997}, {"type": "ndcg_at_10", "value": 20.195}, {"type": "ndcg_at_100", "value": 19.337}, {"type": "ndcg_at_1000", "value": 28.089}, {"type": "ndcg_at_20", "value": 18.741}, {"type": "ndcg_at_3", "value": 23.221}, {"type": "ndcg_at_5", "value": 22.076}, {"type": "precision_at_1", "value": 26.935}, {"type": "precision_at_10", "value": 15.076999999999998}, {"type": "precision_at_100", "value": 5.492}, {"type": "precision_at_1000", "value": 1.779}, {"type": "precision_at_20", 
"value": 11.315999999999999}, {"type": "precision_at_3", "value": 22.291}, {"type": "precision_at_5", "value": 19.442999999999998}, {"type": "recall_at_1", "value": 2.945}, {"type": "recall_at_10", "value": 9.578000000000001}, {"type": "recall_at_100", "value": 21.876}, {"type": "recall_at_1000", "value": 52.305}, {"type": "recall_at_20", "value": 12.041}, {"type": "recall_at_3", "value": 5.892}, {"type": "recall_at_5", "value": 7.553}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ (default)", "type": "mteb/nq", "config": "default", "split": "test", "revision": "b774495ed302d8c44a3a7ea25c90dbce03968f31"}, "metrics": [{"type": "main_score", "value": 16.625999999999998}, {"type": "map_at_1", "value": 7.602}, {"type": "map_at_10", "value": 13.062999999999999}, {"type": "map_at_100", "value": 13.987}, {"type": "map_at_1000", "value": 14.079}, {"type": "map_at_20", "value": 13.571}, {"type": "map_at_3", "value": 11.176}, {"type": "map_at_5", "value": 12.106}, {"type": "mrr_at_1", "value": 8.603707995365006}, {"type": "mrr_at_10", "value": 14.42836221008294}, {"type": "mrr_at_100", "value": 15.327619422421954}, {"type": "mrr_at_1000", "value": 15.409036789424253}, {"type": "mrr_at_20", "value": 14.93547472380801}, {"type": "mrr_at_3", "value": 12.47103128621087}, {"type": "mrr_at_5", "value": 13.469003476245614}, {"type": "nauc_map_at_1000_diff1", "value": 22.53140441222631}, {"type": "nauc_map_at_1000_max", "value": 9.698467434337164}, {"type": "nauc_map_at_1000_std", "value": 11.345286586567711}, {"type": "nauc_map_at_100_diff1", "value": 22.522109671269725}, {"type": "nauc_map_at_100_max", "value": 9.676164796732111}, {"type": "nauc_map_at_100_std", "value": 11.248118748580213}, {"type": "nauc_map_at_10_diff1", "value": 22.491463480004217}, {"type": "nauc_map_at_10_max", "value": 9.573423511496964}, {"type": "nauc_map_at_10_std", "value": 9.979232730710939}, {"type": "nauc_map_at_1_diff1", "value": 25.732381756890373}, {"type": "nauc_map_at_1_max", "value": 10.84116623858562}, {"type": "nauc_map_at_1_std", "value": 7.338100936490713}, {"type": "nauc_map_at_20_diff1", "value": 22.53923355716687}, {"type": "nauc_map_at_20_max", "value": 9.652116335078432}, {"type": "nauc_map_at_20_std", "value": 10.670127330165213}, {"type": "nauc_map_at_3_diff1", "value": 22.618120346879962}, {"type": "nauc_map_at_3_max", "value": 9.086786085358039}, {"type": "nauc_map_at_3_std", "value": 8.20995647201015}, {"type": "nauc_map_at_5_diff1", "value": 22.64115159473954}, {"type": "nauc_map_at_5_max", "value": 9.194704682395841}, {"type": "nauc_map_at_5_std", "value": 8.810417562333175}, {"type": "nauc_mrr_at_1000_diff1", "value": 22.344805318852927}, {"type": "nauc_mrr_at_1000_max", "value": 9.163892702470772}, {"type": "nauc_mrr_at_1000_std", "value": 11.351012340897705}, {"type": "nauc_mrr_at_100_diff1", "value": 22.336047255436036}, {"type": "nauc_mrr_at_100_max", "value": 9.145224907428604}, {"type": "nauc_mrr_at_100_std", "value": 11.28358245102265}, {"type": "nauc_mrr_at_10_diff1", "value": 22.25037742257287}, {"type": "nauc_mrr_at_10_max", "value": 8.95546839387158}, {"type": "nauc_mrr_at_10_std", "value": 10.271673610986973}, {"type": "nauc_mrr_at_1_diff1", "value": 25.910389767357838}, {"type": "nauc_mrr_at_1_max", "value": 10.043907328097326}, {"type": "nauc_mrr_at_1_std", "value": 7.5411580653545665}, {"type": "nauc_mrr_at_20_diff1", "value": 22.330127161074522}, {"type": "nauc_mrr_at_20_max", "value": 9.103315674717512}, {"type": "nauc_mrr_at_20_std", "value": 10.85259680963488}, 
{"type": "nauc_mrr_at_3_diff1", "value": 22.85678641354908}, {"type": "nauc_mrr_at_3_max", "value": 8.525432350871027}, {"type": "nauc_mrr_at_3_std", "value": 8.877916382224424}, {"type": "nauc_mrr_at_5_diff1", "value": 22.5016422227308}, {"type": "nauc_mrr_at_5_max", "value": 8.71305219879408}, {"type": "nauc_mrr_at_5_std", "value": 9.480157566645657}, {"type": "nauc_ndcg_at_1000_diff1", "value": 21.615613945394724}, {"type": "nauc_ndcg_at_1000_max", "value": 10.140787968124906}, {"type": "nauc_ndcg_at_1000_std", "value": 18.91156900295804}, {"type": "nauc_ndcg_at_100_diff1", "value": 21.449543597580128}, {"type": "nauc_ndcg_at_100_max", "value": 9.764472700567374}, {"type": "nauc_ndcg_at_100_std", "value": 17.00068045706022}, {"type": "nauc_ndcg_at_10_diff1", "value": 21.467825825652994}, {"type": "nauc_ndcg_at_10_max", "value": 9.433691829219262}, {"type": "nauc_ndcg_at_10_std", "value": 11.645671336911704}, {"type": "nauc_ndcg_at_1_diff1", "value": 26.192032369383917}, {"type": "nauc_ndcg_at_1_max", "value": 9.968495759668212}, {"type": "nauc_ndcg_at_1_std", "value": 7.7353705558822625}, {"type": "nauc_ndcg_at_20_diff1", "value": 21.559602636114114}, {"type": "nauc_ndcg_at_20_max", "value": 9.611322723657722}, {"type": "nauc_ndcg_at_20_std", "value": 13.45124376661578}, {"type": "nauc_ndcg_at_3_diff1", "value": 21.942611052570136}, {"type": "nauc_ndcg_at_3_max", "value": 8.546943480026158}, {"type": "nauc_ndcg_at_3_std", "value": 8.558826963066005}, {"type": "nauc_ndcg_at_5_diff1", "value": 21.81661495292013}, {"type": "nauc_ndcg_at_5_max", "value": 8.814628270505972}, {"type": "nauc_ndcg_at_5_std", "value": 9.553325054391859}, {"type": "nauc_precision_at_1000_diff1", "value": 12.97521395897297}, {"type": "nauc_precision_at_1000_max", "value": 8.478693692040677}, {"type": "nauc_precision_at_1000_std", "value": 36.05577365548163}, {"type": "nauc_precision_at_100_diff1", "value": 16.657230982970816}, {"type": "nauc_precision_at_100_max", "value": 8.209079564335859}, {"type": "nauc_precision_at_100_std", "value": 30.27783657644826}, {"type": "nauc_precision_at_10_diff1", "value": 19.33963457061645}, {"type": "nauc_precision_at_10_max", "value": 8.214228079850216}, {"type": "nauc_precision_at_10_std", "value": 15.301384981024956}, {"type": "nauc_precision_at_1_diff1", "value": 26.192032369383917}, {"type": "nauc_precision_at_1_max", "value": 9.968495759668212}, {"type": "nauc_precision_at_1_std", "value": 7.7353705558822625}, {"type": "nauc_precision_at_20_diff1", "value": 19.43023023951576}, {"type": "nauc_precision_at_20_max", "value": 8.721068295460837}, {"type": "nauc_precision_at_20_std", "value": 19.87359595692818}, {"type": "nauc_precision_at_3_diff1", "value": 20.98520268342122}, {"type": "nauc_precision_at_3_max", "value": 6.997024310982154}, {"type": "nauc_precision_at_3_std", "value": 9.277070159437823}, {"type": "nauc_precision_at_5_diff1", "value": 20.628506750684906}, {"type": "nauc_precision_at_5_max", "value": 7.3222879491405966}, {"type": "nauc_precision_at_5_std", "value": 11.105451869396907}, {"type": "nauc_recall_at_1000_diff1", "value": 18.99843982673276}, {"type": "nauc_recall_at_1000_max", "value": 12.09746379039881}, {"type": "nauc_recall_at_1000_std", "value": 47.137305858569114}, {"type": "nauc_recall_at_100_diff1", "value": 18.68777092563649}, {"type": "nauc_recall_at_100_max", "value": 10.03569790422345}, {"type": "nauc_recall_at_100_std", "value": 30.950722423284187}, {"type": "nauc_recall_at_10_diff1", "value": 19.128896089153272}, {"type": 
"nauc_recall_at_10_max", "value": 9.482016845402566}, {"type": "nauc_recall_at_10_std", "value": 14.412901077358026}, {"type": "nauc_recall_at_1_diff1", "value": 25.732381756890373}, {"type": "nauc_recall_at_1_max", "value": 10.84116623858562}, {"type": "nauc_recall_at_1_std", "value": 7.338100936490713}, {"type": "nauc_recall_at_20_diff1", "value": 19.030846984141448}, {"type": "nauc_recall_at_20_max", "value": 9.542717362815113}, {"type": "nauc_recall_at_20_std", "value": 18.266090149714877}, {"type": "nauc_recall_at_3_diff1", "value": 19.7871264081636}, {"type": "nauc_recall_at_3_max", "value": 7.737796420966328}, {"type": "nauc_recall_at_3_std", "value": 8.666865785409758}, {"type": "nauc_recall_at_5_diff1", "value": 19.892123057309906}, {"type": "nauc_recall_at_5_max", "value": 8.298969098777071}, {"type": "nauc_recall_at_5_std", "value": 10.420267011108276}, {"type": "ndcg_at_1", "value": 8.575000000000001}, {"type": "ndcg_at_10", "value": 16.625999999999998}, {"type": "ndcg_at_100", "value": 21.397}, {"type": "ndcg_at_1000", "value": 24.018}, {"type": "ndcg_at_20", "value": 18.421000000000003}, {"type": "ndcg_at_3", "value": 12.658}, {"type": "ndcg_at_5", "value": 14.338999999999999}, {"type": "precision_at_1", "value": 8.575000000000001}, {"type": "precision_at_10", "value": 3.1}, {"type": "precision_at_100", "value": 0.581}, {"type": "precision_at_1000", "value": 0.083}, {"type": "precision_at_20", "value": 1.957}, {"type": "precision_at_3", "value": 6.064}, {"type": "precision_at_5", "value": 4.577}, {"type": "recall_at_1", "value": 7.602}, {"type": "recall_at_10", "value": 26.400000000000002}, {"type": "recall_at_100", "value": 48.634}, {"type": "recall_at_1000", "value": 68.90899999999999}, {"type": "recall_at_20", "value": 33.164}, {"type": "recall_at_3", "value": 15.679000000000002}, {"type": "recall_at_5", "value": 19.602}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval (default)", "type": "mteb/quora", "config": "default", "split": "test", "revision": "e4e08e0b7dbe3c8700f0daef558ff32256715259"}, "metrics": [{"type": "main_score", "value": 72.21600000000001}, {"type": "map_at_1", "value": 55.492}, {"type": "map_at_10", "value": 67.14}, {"type": "map_at_100", "value": 67.946}, {"type": "map_at_1000", "value": 67.989}, {"type": "map_at_20", "value": 67.62899999999999}, {"type": "map_at_3", "value": 64.279}, {"type": "map_at_5", "value": 65.967}, {"type": "mrr_at_1", "value": 63.88}, {"type": "mrr_at_10", "value": 71.93389682539649}, {"type": "mrr_at_100", "value": 72.28910372486355}, {"type": "mrr_at_1000", "value": 72.3024887760509}, {"type": "mrr_at_20", "value": 72.16798297677506}, {"type": "mrr_at_3", "value": 70.188333333333}, {"type": "mrr_at_5", "value": 71.26733333333279}, {"type": "nauc_map_at_1000_diff1", "value": 68.1515295383326}, {"type": "nauc_map_at_1000_max", "value": 37.54695769101746}, {"type": "nauc_map_at_1000_std", "value": -12.805904377344}, {"type": "nauc_map_at_100_diff1", "value": 68.14399424614948}, {"type": "nauc_map_at_100_max", "value": 37.54328801779939}, {"type": "nauc_map_at_100_std", "value": -12.83845975657647}, {"type": "nauc_map_at_10_diff1", "value": 68.04097237081037}, {"type": "nauc_map_at_10_max", "value": 37.19790649304174}, {"type": "nauc_map_at_10_std", "value": -13.574656560807451}, {"type": "nauc_map_at_1_diff1", "value": 70.06050188284856}, {"type": "nauc_map_at_1_max", "value": 32.7950423160114}, {"type": "nauc_map_at_1_std", "value": -15.96831844096167}, {"type": "nauc_map_at_20_diff1", "value": 
68.09197231492732}, {"type": "nauc_map_at_20_max", "value": 37.385624168302684}, {"type": "nauc_map_at_20_std", "value": -13.155476799236565}, {"type": "nauc_map_at_3_diff1", "value": 68.23134276838651}, {"type": "nauc_map_at_3_max", "value": 36.23832837393925}, {"type": "nauc_map_at_3_std", "value": -15.423833858804532}, {"type": "nauc_map_at_5_diff1", "value": 67.95900982506224}, {"type": "nauc_map_at_5_max", "value": 36.53132827026241}, {"type": "nauc_map_at_5_std", "value": -14.482907430203696}, {"type": "nauc_mrr_at_1000_diff1", "value": 69.62457918048828}, {"type": "nauc_mrr_at_1000_max", "value": 40.07844145179273}, {"type": "nauc_mrr_at_1000_std", "value": -10.644864923349227}, {"type": "nauc_mrr_at_100_diff1", "value": 69.62059876593055}, {"type": "nauc_mrr_at_100_max", "value": 40.07904892244788}, {"type": "nauc_mrr_at_100_std", "value": -10.637692251883314}, {"type": "nauc_mrr_at_10_diff1", "value": 69.52502303386919}, {"type": "nauc_mrr_at_10_max", "value": 40.10809003322649}, {"type": "nauc_mrr_at_10_std", "value": -10.684922661530145}, {"type": "nauc_mrr_at_1_diff1", "value": 72.0826342696167}, {"type": "nauc_mrr_at_1_max", "value": 39.8840674644011}, {"type": "nauc_mrr_at_1_std", "value": -12.897908766689145}, {"type": "nauc_mrr_at_20_diff1", "value": 69.58190352660375}, {"type": "nauc_mrr_at_20_max", "value": 40.0783519699091}, {"type": "nauc_mrr_at_20_std", "value": -10.629858366175634}, {"type": "nauc_mrr_at_3_diff1", "value": 69.46685839511639}, {"type": "nauc_mrr_at_3_max", "value": 39.98286553507212}, {"type": "nauc_mrr_at_3_std", "value": -11.679166167876408}, {"type": "nauc_mrr_at_5_diff1", "value": 69.44350507999052}, {"type": "nauc_mrr_at_5_max", "value": 39.91668797347604}, {"type": "nauc_mrr_at_5_std", "value": -11.060504498483011}, {"type": "nauc_ndcg_at_1000_diff1", "value": 68.08522323983172}, {"type": "nauc_ndcg_at_1000_max", "value": 38.78930800558068}, {"type": "nauc_ndcg_at_1000_std", "value": -9.380187466388266}, {"type": "nauc_ndcg_at_100_diff1", "value": 67.89445682736151}, {"type": "nauc_ndcg_at_100_max", "value": 38.76088209944818}, {"type": "nauc_ndcg_at_100_std", "value": -9.332407536563391}, {"type": "nauc_ndcg_at_10_diff1", "value": 67.32980612110863}, {"type": "nauc_ndcg_at_10_max", "value": 38.20460047799145}, {"type": "nauc_ndcg_at_10_std", "value": -11.08956339625659}, {"type": "nauc_ndcg_at_1_diff1", "value": 72.02112312263394}, {"type": "nauc_ndcg_at_1_max", "value": 39.88906073001357}, {"type": "nauc_ndcg_at_1_std", "value": -12.890119245130952}, {"type": "nauc_ndcg_at_20_diff1", "value": 67.57306809180233}, {"type": "nauc_ndcg_at_20_max", "value": 38.344690097960274}, {"type": "nauc_ndcg_at_20_std", "value": -10.361436571647312}, {"type": "nauc_ndcg_at_3_diff1", "value": 67.3468184467274}, {"type": "nauc_ndcg_at_3_max", "value": 37.71021875036499}, {"type": "nauc_ndcg_at_3_std", "value": -13.237678612410885}, {"type": "nauc_ndcg_at_5_diff1", "value": 67.09372417578471}, {"type": "nauc_ndcg_at_5_max", "value": 37.36947760591302}, {"type": "nauc_ndcg_at_5_std", "value": -12.359281253686154}, {"type": "nauc_precision_at_1000_diff1", "value": -25.80880508117984}, {"type": "nauc_precision_at_1000_max", "value": -1.6580485832415812}, {"type": "nauc_precision_at_1000_std", "value": 25.560507838753338}, {"type": "nauc_precision_at_100_diff1", "value": -19.632955996672276}, {"type": "nauc_precision_at_100_max", "value": 4.1425708066313245}, {"type": "nauc_precision_at_100_std", "value": 24.887433001522812}, {"type": "nauc_precision_at_10_diff1", 
"value": 0.031433531700304586}, {"type": "nauc_precision_at_10_max", "value": 16.22983398823126}, {"type": "nauc_precision_at_10_std", "value": 14.947957320162104}, {"type": "nauc_precision_at_1_diff1", "value": 72.02112312263394}, {"type": "nauc_precision_at_1_max", "value": 39.88906073001357}, {"type": "nauc_precision_at_1_std", "value": -12.890119245130952}, {"type": "nauc_precision_at_20_diff1", "value": -8.42349247517796}, {"type": "nauc_precision_at_20_max", "value": 11.629391098316281}, {"type": "nauc_precision_at_20_std", "value": 19.70570016448179}, {"type": "nauc_precision_at_3_diff1", "value": 26.589087035439178}, {"type": "nauc_precision_at_3_max", "value": 26.505762530737552}, {"type": "nauc_precision_at_3_std", "value": 0.5240934369043053}, {"type": "nauc_precision_at_5_diff1", "value": 13.15610105266614}, {"type": "nauc_precision_at_5_max", "value": 20.826382527147143}, {"type": "nauc_precision_at_5_std", "value": 7.122292405395125}, {"type": "nauc_recall_at_1000_diff1", "value": 60.43306391136544}, {"type": "nauc_recall_at_1000_max", "value": 39.04457194993258}, {"type": "nauc_recall_at_1000_std", "value": 48.10736929427494}, {"type": "nauc_recall_at_100_diff1", "value": 57.50786726720306}, {"type": "nauc_recall_at_100_max", "value": 36.96036359746097}, {"type": "nauc_recall_at_100_std", "value": 17.43991349542963}, {"type": "nauc_recall_at_10_diff1", "value": 59.55969023354459}, {"type": "nauc_recall_at_10_max", "value": 35.77811946712584}, {"type": "nauc_recall_at_10_std", "value": -5.308872679370513}, {"type": "nauc_recall_at_1_diff1", "value": 70.06050188284856}, {"type": "nauc_recall_at_1_max", "value": 32.7950423160114}, {"type": "nauc_recall_at_1_std", "value": -15.96831844096167}, {"type": "nauc_recall_at_20_diff1", "value": 59.36231963477334}, {"type": "nauc_recall_at_20_max", "value": 35.74083065802152}, {"type": "nauc_recall_at_20_std", "value": 0.10479581364017149}, {"type": "nauc_recall_at_3_diff1", "value": 62.98815867302664}, {"type": "nauc_recall_at_3_max", "value": 34.35053076037745}, {"type": "nauc_recall_at_3_std", "value": -14.653656328573552}, {"type": "nauc_recall_at_5_diff1", "value": 60.83589763557177}, {"type": "nauc_recall_at_5_max", "value": 33.67596252960681}, {"type": "nauc_recall_at_5_std", "value": -10.962319835832373}, {"type": "ndcg_at_1", "value": 63.91}, {"type": "ndcg_at_10", "value": 72.21600000000001}, {"type": "ndcg_at_100", "value": 74.895}, {"type": "ndcg_at_1000", "value": 75.606}, {"type": "ndcg_at_20", "value": 73.41499999999999}, {"type": "ndcg_at_3", "value": 68.27600000000001}, {"type": "ndcg_at_5", "value": 70.243}, {"type": "precision_at_1", "value": 63.91}, {"type": "precision_at_10", "value": 10.947999999999999}, {"type": "precision_at_100", "value": 1.351}, {"type": "precision_at_1000", "value": 0.148}, {"type": "precision_at_20", "value": 5.94}, {"type": "precision_at_3", "value": 29.473}, {"type": "precision_at_5", "value": 19.6}, {"type": "recall_at_1", "value": 55.492}, {"type": "recall_at_10", "value": 82.191}, {"type": "recall_at_100", "value": 93.01700000000001}, {"type": "recall_at_1000", "value": 97.548}, {"type": "recall_at_20", "value": 86.37899999999999}, {"type": "recall_at_3", "value": 70.977}, {"type": "recall_at_5", "value": 76.347}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering (default)", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "main_score", "value": 
20.032944959465006}, {"type": "v_measure", "value": 20.032944959465006}, {"type": "v_measure_std", "value": 3.977494953209651}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P (default)", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "385e3cb46b4cfa89021f56c4380204149d0efe33"}, "metrics": [{"type": "main_score", "value": 35.59444866525171}, {"type": "v_measure", "value": 35.59444866525171}, {"type": "v_measure_std", "value": 9.825394674707393}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS (default)", "type": "mteb/scidocs", "config": "default", "split": "test", "revision": "f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88"}, "metrics": [{"type": "main_score", "value": 9.735000000000001}, {"type": "map_at_1", "value": 2.29}, {"type": "map_at_10", "value": 5.306}, {"type": "map_at_100", "value": 6.419}, {"type": "map_at_1000", "value": 6.643000000000001}, {"type": "map_at_20", "value": 5.821}, {"type": "map_at_3", "value": 3.9280000000000004}, {"type": "map_at_5", "value": 4.569}, {"type": "mrr_at_1", "value": 11.3}, {"type": "mrr_at_10", "value": 17.82924603174603}, {"type": "mrr_at_100", "value": 19.05406312305304}, {"type": "mrr_at_1000", "value": 19.166442675275487}, {"type": "mrr_at_20", "value": 18.541102740139127}, {"type": "mrr_at_3", "value": 15.5}, {"type": "mrr_at_5", "value": 16.770000000000003}, {"type": "nauc_map_at_1000_diff1", "value": 16.74948758074446}, {"type": "nauc_map_at_1000_max", "value": 16.272314062078568}, {"type": "nauc_map_at_1000_std", "value": 9.802528912458712}, {"type": "nauc_map_at_100_diff1", "value": 16.799538408566203}, {"type": "nauc_map_at_100_max", "value": 16.161996440884547}, {"type": "nauc_map_at_100_std", "value": 9.313374283200838}, {"type": "nauc_map_at_10_diff1", "value": 17.678328509818915}, {"type": "nauc_map_at_10_max", "value": 16.15778661911792}, {"type": "nauc_map_at_10_std", "value": 6.28185653746446}, {"type": "nauc_map_at_1_diff1", "value": 23.710070267313093}, {"type": "nauc_map_at_1_max", "value": 11.885861612261381}, {"type": "nauc_map_at_1_std", "value": 2.3279474317156885}, {"type": "nauc_map_at_20_diff1", "value": 17.40871167385732}, {"type": "nauc_map_at_20_max", "value": 16.071559259108398}, {"type": "nauc_map_at_20_std", "value": 7.768356041713391}, {"type": "nauc_map_at_3_diff1", "value": 19.657615469261334}, {"type": "nauc_map_at_3_max", "value": 13.47369716035662}, {"type": "nauc_map_at_3_std", "value": 4.1278803917212645}, {"type": "nauc_map_at_5_diff1", "value": 18.1480875610142}, {"type": "nauc_map_at_5_max", "value": 14.230776814076188}, {"type": "nauc_map_at_5_std", "value": 3.622953870263071}, {"type": "nauc_mrr_at_1000_diff1", "value": 15.571325709422576}, {"type": "nauc_mrr_at_1000_max", "value": 11.961320489344015}, {"type": "nauc_mrr_at_1000_std", "value": 4.450063785196112}, {"type": "nauc_mrr_at_100_diff1", "value": 15.536886637669223}, {"type": "nauc_mrr_at_100_max", "value": 11.935672425486493}, {"type": "nauc_mrr_at_100_std", "value": 4.447377015945805}, {"type": "nauc_mrr_at_10_diff1", "value": 15.53551580233096}, {"type": "nauc_mrr_at_10_max", "value": 12.042070930511581}, {"type": "nauc_mrr_at_10_std", "value": 3.93193017344515}, {"type": "nauc_mrr_at_1_diff1", "value": 23.57876428894364}, {"type": "nauc_mrr_at_1_max", "value": 12.12043352908189}, {"type": "nauc_mrr_at_1_std", "value": 2.7432795333657802}, {"type": "nauc_mrr_at_20_diff1", "value": 15.495682418058236}, {"type": "nauc_mrr_at_20_max", "value": 
11.827749923147806}, {"type": "nauc_mrr_at_20_std", "value": 4.337028857179092}, {"type": "nauc_mrr_at_3_diff1", "value": 17.374795948438287}, {"type": "nauc_mrr_at_3_max", "value": 12.27547705847877}, {"type": "nauc_mrr_at_3_std", "value": 3.4490922395838437}, {"type": "nauc_mrr_at_5_diff1", "value": 15.960359036123982}, {"type": "nauc_mrr_at_5_max", "value": 11.88067300089711}, {"type": "nauc_mrr_at_5_std", "value": 3.392937900544711}, {"type": "nauc_ndcg_at_1000_diff1", "value": 12.368330070387364}, {"type": "nauc_ndcg_at_1000_max", "value": 15.835670856182855}, {"type": "nauc_ndcg_at_1000_std", "value": 15.767747221405982}, {"type": "nauc_ndcg_at_100_diff1", "value": 12.811381379776785}, {"type": "nauc_ndcg_at_100_max", "value": 14.882621585275865}, {"type": "nauc_ndcg_at_100_std", "value": 12.918687954717855}, {"type": "nauc_ndcg_at_10_diff1", "value": 14.52700036040429}, {"type": "nauc_ndcg_at_10_max", "value": 14.99153202568684}, {"type": "nauc_ndcg_at_10_std", "value": 7.059926007520434}, {"type": "nauc_ndcg_at_1_diff1", "value": 23.57876428894364}, {"type": "nauc_ndcg_at_1_max", "value": 12.12043352908189}, {"type": "nauc_ndcg_at_1_std", "value": 2.7432795333657802}, {"type": "nauc_ndcg_at_20_diff1", "value": 14.319122424506627}, {"type": "nauc_ndcg_at_20_max", "value": 14.419552299612848}, {"type": "nauc_ndcg_at_20_std", "value": 9.576470751691424}, {"type": "nauc_ndcg_at_3_diff1", "value": 17.65692103341824}, {"type": "nauc_ndcg_at_3_max", "value": 13.00851027147015}, {"type": "nauc_ndcg_at_3_std", "value": 4.543310351593028}, {"type": "nauc_ndcg_at_5_diff1", "value": 15.317354497478568}, {"type": "nauc_ndcg_at_5_max", "value": 13.129615291647509}, {"type": "nauc_ndcg_at_5_std", "value": 3.616970208712892}, {"type": "nauc_precision_at_1000_diff1", "value": 4.085961395572508}, {"type": "nauc_precision_at_1000_max", "value": 14.479584032109782}, {"type": "nauc_precision_at_1000_std", "value": 24.822216452687034}, {"type": "nauc_precision_at_100_diff1", "value": 6.64282379200282}, {"type": "nauc_precision_at_100_max", "value": 13.909179682772733}, {"type": "nauc_precision_at_100_std", "value": 19.78443717745781}, {"type": "nauc_precision_at_10_diff1", "value": 10.32762182507006}, {"type": "nauc_precision_at_10_max", "value": 15.932552899576436}, {"type": "nauc_precision_at_10_std", "value": 9.985036232997176}, {"type": "nauc_precision_at_1_diff1", "value": 23.57876428894364}, {"type": "nauc_precision_at_1_max", "value": 12.12043352908189}, {"type": "nauc_precision_at_1_std", "value": 2.7432795333657802}, {"type": "nauc_precision_at_20_diff1", "value": 10.198754689783367}, {"type": "nauc_precision_at_20_max", "value": 13.924027767491506}, {"type": "nauc_precision_at_20_std", "value": 14.451908759139476}, {"type": "nauc_precision_at_3_diff1", "value": 15.398209236569278}, {"type": "nauc_precision_at_3_max", "value": 12.545120019251204}, {"type": "nauc_precision_at_3_std", "value": 5.4174103114283865}, {"type": "nauc_precision_at_5_diff1", "value": 11.38440294831457}, {"type": "nauc_precision_at_5_max", "value": 12.929100768052521}, {"type": "nauc_precision_at_5_std", "value": 3.8826269454849633}, {"type": "nauc_recall_at_1000_diff1", "value": 3.9395149122426343}, {"type": "nauc_recall_at_1000_max", "value": 15.24096774447918}, {"type": "nauc_recall_at_1000_std", "value": 24.88975234530502}, {"type": "nauc_recall_at_100_diff1", "value": 6.580896935042611}, {"type": "nauc_recall_at_100_max", "value": 14.23980695602868}, {"type": "nauc_recall_at_100_std", "value": 19.479776494947476}, 
{"type": "nauc_recall_at_10_diff1", "value": 10.311801645360179}, {"type": "nauc_recall_at_10_max", "value": 15.658321964659283}, {"type": "nauc_recall_at_10_std", "value": 9.723932481966557}, {"type": "nauc_recall_at_1_diff1", "value": 23.710070267313093}, {"type": "nauc_recall_at_1_max", "value": 11.885861612261381}, {"type": "nauc_recall_at_1_std", "value": 2.3279474317156885}, {"type": "nauc_recall_at_20_diff1", "value": 10.24091592712925}, {"type": "nauc_recall_at_20_max", "value": 13.830865415282112}, {"type": "nauc_recall_at_20_std", "value": 14.04027736904607}, {"type": "nauc_recall_at_3_diff1", "value": 15.41219835513545}, {"type": "nauc_recall_at_3_max", "value": 12.333359628892342}, {"type": "nauc_recall_at_3_std", "value": 5.024648219558654}, {"type": "nauc_recall_at_5_diff1", "value": 11.349574574035458}, {"type": "nauc_recall_at_5_max", "value": 12.617927107492545}, {"type": "nauc_recall_at_5_std", "value": 3.5053356310188333}, {"type": "ndcg_at_1", "value": 11.3}, {"type": "ndcg_at_10", "value": 9.735000000000001}, {"type": "ndcg_at_100", "value": 15.195}, {"type": "ndcg_at_1000", "value": 20.23}, {"type": "ndcg_at_20", "value": 11.561}, {"type": "ndcg_at_3", "value": 9.144}, {"type": "ndcg_at_5", "value": 7.953}, {"type": "precision_at_1", "value": 11.3}, {"type": "precision_at_10", "value": 5.11}, {"type": "precision_at_100", "value": 1.3050000000000002}, {"type": "precision_at_1000", "value": 0.253}, {"type": "precision_at_20", "value": 3.615}, {"type": "precision_at_3", "value": 8.533}, {"type": "precision_at_5", "value": 6.98}, {"type": "recall_at_1", "value": 2.29}, {"type": "recall_at_10", "value": 10.362}, {"type": "recall_at_100", "value": 26.457000000000004}, {"type": "recall_at_1000", "value": 51.283}, {"type": "recall_at_20", "value": 14.649999999999999}, {"type": "recall_at_3", "value": 5.175}, {"type": "recall_at_5", "value": 7.0680000000000005}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R (default)", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "20a6d6f312dd54037fe07a32d58e5e168867909d"}, "metrics": [{"type": "cosine_pearson", "value": 72.23089156225602}, {"type": "cosine_spearman", "value": 64.63447730457894}, {"type": "euclidean_pearson", "value": 65.26536048964267}, {"type": "euclidean_spearman", "value": 60.05876325942518}, {"type": "main_score", "value": 64.63447730457894}, {"type": "manhattan_pearson", "value": 63.245519161378716}, {"type": "manhattan_spearman", "value": 59.28103411973211}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12 (default)", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cosine_pearson", "value": 63.108487890245115}, {"type": "cosine_spearman", "value": 58.06781798364534}, {"type": "euclidean_pearson", "value": 51.00455103977482}, {"type": "euclidean_spearman", "value": 47.056606990769154}, {"type": "main_score", "value": 58.06781798364534}, {"type": "manhattan_pearson", "value": 46.6691142816116}, {"type": "manhattan_spearman", "value": 43.82268675196447}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13 (default)", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cosine_pearson", "value": 67.9221550677534}, {"type": "cosine_spearman", "value": 68.7571596382501}, {"type": "euclidean_pearson", "value": 59.4362693562299}, {"type": "euclidean_spearman", "value": 
59.90654031756741}, {"type": "main_score", "value": 68.7571596382501}, {"type": "manhattan_pearson", "value": 58.84015922334945}, {"type": "manhattan_spearman", "value": 58.764668284077416}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14 (default)", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cosine_pearson", "value": 66.96538071580031}, {"type": "cosine_spearman", "value": 65.42522405186078}, {"type": "euclidean_pearson", "value": 58.34297446892109}, {"type": "euclidean_spearman", "value": 57.95969868379801}, {"type": "main_score", "value": 65.42522405186078}, {"type": "manhattan_pearson", "value": 57.158803416050354}, {"type": "manhattan_spearman", "value": 56.70345912508504}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15 (default)", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cosine_pearson", "value": 74.37524523034543}, {"type": "cosine_spearman", "value": 75.08524309134856}, {"type": "euclidean_pearson", "value": 59.05421371900137}, {"type": "euclidean_spearman", "value": 60.8963245864918}, {"type": "main_score", "value": 75.08524309134856}, {"type": "manhattan_pearson", "value": 58.9258972492414}, {"type": "manhattan_spearman", "value": 60.102419570033106}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16 (default)", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cosine_pearson", "value": 63.085067266542495}, {"type": "cosine_spearman", "value": 65.38033636986424}, {"type": "euclidean_pearson", "value": 52.52293105293661}, {"type": "euclidean_spearman", "value": 54.599090360405086}, {"type": "main_score", "value": 65.38033636986424}, {"type": "manhattan_pearson", "value": 52.04583269035374}, {"type": "manhattan_spearman", "value": 53.418934610254134}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (nl-en)", "type": "mteb/sts17-crosslingual-sts", "config": "nl-en", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 23.019969311198167}, {"type": "cosine_spearman", "value": 17.411472418823667}, {"type": "euclidean_pearson", "value": -15.515358361955128}, {"type": "euclidean_spearman", "value": -15.677190499343482}, {"type": "main_score", "value": 17.411472418823667}, {"type": "manhattan_pearson", "value": -12.729052547730687}, {"type": "manhattan_spearman", "value": -12.288504263696268}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-de)", "type": "mteb/sts17-crosslingual-sts", "config": "en-de", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 21.269195172077147}, {"type": "cosine_spearman", "value": 18.575886451336775}, {"type": "euclidean_pearson", "value": -10.21009784982811}, {"type": "euclidean_spearman", "value": -12.92229729710694}, {"type": "main_score", "value": 18.575886451336775}, {"type": "manhattan_pearson", "value": -7.899161245683782}, {"type": "manhattan_spearman", "value": -10.894951447088}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "en-ar", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 4.556875032985485}, {"type": 
"cosine_spearman", "value": 2.0609547970913806}, {"type": "euclidean_pearson", "value": -11.715271322099575}, {"type": "euclidean_spearman", "value": -11.045818218942449}, {"type": "main_score", "value": 2.0609547970913806}, {"type": "manhattan_pearson", "value": -13.961076499664834}, {"type": "manhattan_spearman", "value": -13.632861374757931}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 77.65125036324002}, {"type": "cosine_spearman", "value": 78.69054832378838}, {"type": "euclidean_pearson", "value": 65.42262389971837}, {"type": "euclidean_spearman", "value": 66.17771023288537}, {"type": "main_score", "value": 78.69054832378838}, {"type": "manhattan_pearson", "value": 63.99535802918511}, {"type": "manhattan_spearman", "value": 64.5958799855611}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-en)", "type": "mteb/sts17-crosslingual-sts", "config": "es-en", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 11.304963266444723}, {"type": "cosine_spearman", "value": 9.07719919328374}, {"type": "euclidean_pearson", "value": -6.686339553470129}, {"type": "euclidean_spearman", "value": -13.741969244577302}, {"type": "main_score", "value": 9.07719919328374}, {"type": "manhattan_pearson", "value": -8.751096396459193}, {"type": "manhattan_spearman", "value": -15.472834128866678}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-tr)", "type": "mteb/sts17-crosslingual-sts", "config": "en-tr", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": -0.9487180608661593}, {"type": "cosine_spearman", "value": -3.5467049032356264}, {"type": "euclidean_pearson", "value": -22.379136687351238}, {"type": "euclidean_spearman", "value": -23.937922436585392}, {"type": "main_score", "value": -3.5467049032356264}, {"type": "manhattan_pearson", "value": -23.462933935885573}, {"type": "manhattan_spearman", "value": -22.402845778068887}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (fr-en)", "type": "mteb/sts17-crosslingual-sts", "config": "fr-en", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 26.412738827821325}, {"type": "cosine_spearman", "value": 21.096028679832475}, {"type": "euclidean_pearson", "value": -12.961356992788911}, {"type": "euclidean_spearman", "value": -13.439656615197324}, {"type": "main_score", "value": 21.096028679832475}, {"type": "manhattan_pearson", "value": -13.312399929525135}, {"type": "manhattan_spearman", "value": -13.320455244709303}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (it-en)", "type": "mteb/sts17-crosslingual-sts", "config": "it-en", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 18.315235047821027}, {"type": "cosine_spearman", "value": 15.405153060148468}, {"type": "euclidean_pearson", "value": -16.19883745793275}, {"type": "euclidean_spearman", "value": -16.332471299959188}, {"type": "main_score", "value": 15.405153060148468}, {"type": "manhattan_pearson", "value": -15.174493494372754}, {"type": "manhattan_spearman", "value": -14.235895631091836}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-en)", "type": 
"mteb/sts22-crosslingual-sts", "config": "de-en", "split": "test", "revision": "de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3"}, "metrics": [{"type": "cosine_pearson", "value": 20.46710263573031}, {"type": "cosine_spearman", "value": 28.326540334389122}, {"type": "euclidean_pearson", "value": 20.858737030398395}, {"type": "euclidean_spearman", "value": 29.872601047020126}, {"type": "main_score", "value": 28.326540334389122}, {"type": "manhattan_pearson", "value": 19.218328249978722}, {"type": "manhattan_spearman", "value": 33.264521141243655}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh-en)", "type": "mteb/sts22-crosslingual-sts", "config": "zh-en", "split": "test", "revision": "de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3"}, "metrics": [{"type": "cosine_pearson", "value": -3.5232243177317475}, {"type": "cosine_spearman", "value": 4.537053084710515}, {"type": "euclidean_pearson", "value": 6.374530133957361}, {"type": "euclidean_spearman", "value": 3.6684963723679562}, {"type": "main_score", "value": 4.537053084710515}, {"type": "manhattan_pearson", "value": 6.918896438279671}, {"type": "manhattan_spearman", "value": 1.9104862843510344}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3"}, "metrics": [{"type": "cosine_pearson", "value": 42.353863109448966}, {"type": "cosine_spearman", "value": 52.55694057880419}, {"type": "euclidean_pearson", "value": 41.58894055719116}, {"type": "euclidean_spearman", "value": 50.499978942016014}, {"type": "main_score", "value": 52.55694057880419}, {"type": "manhattan_pearson", "value": 39.23263050152607}, {"type": "manhattan_spearman", "value": 47.982776818718506}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl-en)", "type": "mteb/sts22-crosslingual-sts", "config": "pl-en", "split": "test", "revision": "de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3"}, "metrics": [{"type": "cosine_pearson", "value": 9.895824519159905}, {"type": "cosine_spearman", "value": 14.528808646639648}, {"type": "euclidean_pearson", "value": 30.766730901000265}, {"type": "euclidean_spearman", "value": 16.482305685897398}, {"type": "main_score", "value": 14.528808646639648}, {"type": "manhattan_pearson", "value": 32.72091783931039}, {"type": "manhattan_spearman", "value": 11.606377075910054}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-en)", "type": "mteb/sts22-crosslingual-sts", "config": "es-en", "split": "test", "revision": "de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3"}, "metrics": [{"type": "cosine_pearson", "value": 10.835100493377169}, {"type": "cosine_spearman", "value": 13.188080238562986}, {"type": "euclidean_pearson", "value": 13.222129117792575}, {"type": "euclidean_spearman", "value": 16.35349476750803}, {"type": "main_score", "value": 13.188080238562986}, {"type": "manhattan_pearson", "value": 18.24829227713276}, {"type": "manhattan_spearman", "value": 21.542234667592027}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark (default)", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cosine_pearson", "value": 65.71454631261894}, {"type": "cosine_spearman", "value": 65.48413591571544}, {"type": "euclidean_pearson", "value": 57.20872936896835}, {"type": "euclidean_spearman", "value": 57.60081037404292}, {"type": "main_score", "value": 65.48413591571544}, {"type": 
"manhattan_pearson", "value": 55.60537290238107}, {"type": "manhattan_spearman", "value": 56.096969186945564}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR (default)", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "main_score", "value": 68.56134632503664}, {"type": "map", "value": 68.56134632503664}, {"type": "mrr", "value": 88.76940234783373}, {"type": "nAUC_map_diff1", "value": 12.337237293429535}, {"type": "nAUC_map_max", "value": 56.05626340436826}, {"type": "nAUC_map_std", "value": 66.20136946235245}, {"type": "nAUC_mrr_diff1", "value": 49.13360859462996}, {"type": "nAUC_mrr_max", "value": 75.19817364500312}, {"type": "nAUC_mrr_std", "value": 71.27479674596098}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact (default)", "type": "mteb/scifact", "config": "default", "split": "test", "revision": "0228b52cf27578f30900b9e5271d331663a030d7"}, "metrics": [{"type": "main_score", "value": 45.168}, {"type": "map_at_1", "value": 31.722}, {"type": "map_at_10", "value": 40.361000000000004}, {"type": "map_at_100", "value": 41.412}, {"type": "map_at_1000", "value": 41.483}, {"type": "map_at_20", "value": 41.026}, {"type": "map_at_3", "value": 37.676}, {"type": "map_at_5", "value": 39.15}, {"type": "mrr_at_1", "value": 33.666666666666664}, {"type": "mrr_at_10", "value": 41.68544973544974}, {"type": "mrr_at_100", "value": 42.57351821618796}, {"type": "mrr_at_1000", "value": 42.63566974762014}, {"type": "mrr_at_20", "value": 42.24279031798382}, {"type": "mrr_at_3", "value": 39.16666666666666}, {"type": "mrr_at_5", "value": 40.56666666666666}, {"type": "nauc_map_at_1000_diff1", "value": 55.77535499706605}, {"type": "nauc_map_at_1000_max", "value": 37.686384513064496}, {"type": "nauc_map_at_1000_std", "value": -0.38356448588082925}, {"type": "nauc_map_at_100_diff1", "value": 55.76685805908298}, {"type": "nauc_map_at_100_max", "value": 37.69512830675277}, {"type": "nauc_map_at_100_std", "value": -0.3816790631470584}, {"type": "nauc_map_at_10_diff1", "value": 55.31825864401214}, {"type": "nauc_map_at_10_max", "value": 37.88770668112794}, {"type": "nauc_map_at_10_std", "value": -0.6860500769894244}, {"type": "nauc_map_at_1_diff1", "value": 62.113628227161165}, {"type": "nauc_map_at_1_max", "value": 37.183535942278596}, {"type": "nauc_map_at_1_std", "value": -3.5410481282149067}, {"type": "nauc_map_at_20_diff1", "value": 55.65751983454559}, {"type": "nauc_map_at_20_max", "value": 37.69345024816029}, {"type": "nauc_map_at_20_std", "value": -0.43593256548163833}, {"type": "nauc_map_at_3_diff1", "value": 55.82307496058825}, {"type": "nauc_map_at_3_max", "value": 36.720146164571474}, {"type": "nauc_map_at_3_std", "value": -2.819390810134275}, {"type": "nauc_map_at_5_diff1", "value": 55.53584523712401}, {"type": "nauc_map_at_5_max", "value": 37.845081976188375}, {"type": "nauc_map_at_5_std", "value": -1.9066901557785676}, {"type": "nauc_mrr_at_1000_diff1", "value": 56.418676727795024}, {"type": "nauc_mrr_at_1000_max", "value": 38.304224136608866}, {"type": "nauc_mrr_at_1000_std", "value": 2.4996505957652198}, {"type": "nauc_mrr_at_100_diff1", "value": 56.39703976435698}, {"type": "nauc_mrr_at_100_max", "value": 38.31871253356022}, {"type": "nauc_mrr_at_100_std", "value": 2.499322381767784}, {"type": "nauc_mrr_at_10_diff1", "value": 56.17576873119264}, {"type": "nauc_mrr_at_10_max", "value": 38.63458360266209}, {"type": "nauc_mrr_at_10_std", "value": 
2.8572655679787973}, {"type": "nauc_mrr_at_1_diff1", "value": 63.26354576298176}, {"type": "nauc_mrr_at_1_max", "value": 38.41560245413969}, {"type": "nauc_mrr_at_1_std", "value": -0.17074584083479885}, {"type": "nauc_mrr_at_20_diff1", "value": 56.301767376204936}, {"type": "nauc_mrr_at_20_max", "value": 38.376041663808316}, {"type": "nauc_mrr_at_20_std", "value": 2.649049607362875}, {"type": "nauc_mrr_at_3_diff1", "value": 56.70849572743409}, {"type": "nauc_mrr_at_3_max", "value": 37.09106878190702}, {"type": "nauc_mrr_at_3_std", "value": 0.5218568736162024}, {"type": "nauc_mrr_at_5_diff1", "value": 56.116869610402674}, {"type": "nauc_mrr_at_5_max", "value": 38.448039539152745}, {"type": "nauc_mrr_at_5_std", "value": 1.7341042169043408}, {"type": "nauc_ndcg_at_1000_diff1", "value": 54.78225202376091}, {"type": "nauc_ndcg_at_1000_max", "value": 38.38144373884326}, {"type": "nauc_ndcg_at_1000_std", "value": 2.6358234061241586}, {"type": "nauc_ndcg_at_100_diff1", "value": 54.4093856226575}, {"type": "nauc_ndcg_at_100_max", "value": 38.60612682388555}, {"type": "nauc_ndcg_at_100_std", "value": 2.69908939213741}, {"type": "nauc_ndcg_at_10_diff1", "value": 52.832583000255795}, {"type": "nauc_ndcg_at_10_max", "value": 38.941545213039916}, {"type": "nauc_ndcg_at_10_std", "value": 2.4826858084884753}, {"type": "nauc_ndcg_at_1_diff1", "value": 63.26354576298176}, {"type": "nauc_ndcg_at_1_max", "value": 38.41560245413969}, {"type": "nauc_ndcg_at_1_std", "value": -0.17074584083479885}, {"type": "nauc_ndcg_at_20_diff1", "value": 53.5430044109149}, {"type": "nauc_ndcg_at_20_max", "value": 38.10605834841827}, {"type": "nauc_ndcg_at_20_std", "value": 2.5820729076155344}, {"type": "nauc_ndcg_at_3_diff1", "value": 53.98354338931932}, {"type": "nauc_ndcg_at_3_max", "value": 36.522639379347815}, {"type": "nauc_ndcg_at_3_std", "value": -1.9435738031229932}, {"type": "nauc_ndcg_at_5_diff1", "value": 53.263204590280175}, {"type": "nauc_ndcg_at_5_max", "value": 38.76301110063584}, {"type": "nauc_ndcg_at_5_std", "value": -0.44894792520114274}, {"type": "nauc_precision_at_1000_diff1", "value": 2.6725425569998733}, {"type": "nauc_precision_at_1000_max", "value": 18.217728894320416}, {"type": "nauc_precision_at_1000_std", "value": 41.76202644150659}, {"type": "nauc_precision_at_100_diff1", "value": 23.894022947191242}, {"type": "nauc_precision_at_100_max", "value": 30.465092081989397}, {"type": "nauc_precision_at_100_std", "value": 32.67941090228055}, {"type": "nauc_precision_at_10_diff1", "value": 35.758108716102925}, {"type": "nauc_precision_at_10_max", "value": 38.043682768211404}, {"type": "nauc_precision_at_10_std", "value": 18.94024295472207}, {"type": "nauc_precision_at_1_diff1", "value": 63.26354576298176}, {"type": "nauc_precision_at_1_max", "value": 38.41560245413969}, {"type": "nauc_precision_at_1_std", "value": -0.17074584083479885}, {"type": "nauc_precision_at_20_diff1", "value": 34.336560890067275}, {"type": "nauc_precision_at_20_max", "value": 31.7929720931013}, {"type": "nauc_precision_at_20_std", "value": 23.571932003154835}, {"type": "nauc_precision_at_3_diff1", "value": 44.2135740101036}, {"type": "nauc_precision_at_3_max", "value": 34.2245562189253}, {"type": "nauc_precision_at_3_std", "value": 2.9819692098799435}, {"type": "nauc_precision_at_5_diff1", "value": 40.3310935749158}, {"type": "nauc_precision_at_5_max", "value": 38.63563472800203}, {"type": "nauc_precision_at_5_std", "value": 9.335714313996466}, {"type": "nauc_recall_at_1000_diff1", "value": 56.9369714312583}, {"type": 
"nauc_recall_at_1000_max", "value": 45.8389590848331}, {"type": "nauc_recall_at_1000_std", "value": 36.35310239203547}, {"type": "nauc_recall_at_100_diff1", "value": 48.24197135141656}, {"type": "nauc_recall_at_100_max", "value": 42.702371394909264}, {"type": "nauc_recall_at_100_std", "value": 13.330140889544886}, {"type": "nauc_recall_at_10_diff1", "value": 43.30066118896596}, {"type": "nauc_recall_at_10_max", "value": 40.917885858677245}, {"type": "nauc_recall_at_10_std", "value": 9.071473475388245}, {"type": "nauc_recall_at_1_diff1", "value": 62.113628227161165}, {"type": "nauc_recall_at_1_max", "value": 37.183535942278596}, {"type": "nauc_recall_at_1_std", "value": -3.5410481282149067}, {"type": "nauc_recall_at_20_diff1", "value": 44.24119164214377}, {"type": "nauc_recall_at_20_max", "value": 37.145932987172344}, {"type": "nauc_recall_at_20_std", "value": 9.064570006703589}, {"type": "nauc_recall_at_3_diff1", "value": 47.503698426289645}, {"type": "nauc_recall_at_3_max", "value": 35.181130291364084}, {"type": "nauc_recall_at_3_std", "value": -4.399329816832574}, {"type": "nauc_recall_at_5_diff1", "value": 45.72301353292787}, {"type": "nauc_recall_at_5_max", "value": 40.71394881642516}, {"type": "nauc_recall_at_5_std", "value": -0.017691813104162315}, {"type": "ndcg_at_1", "value": 33.667}, {"type": "ndcg_at_10", "value": 45.168}, {"type": "ndcg_at_100", "value": 50.080000000000005}, {"type": "ndcg_at_1000", "value": 51.878}, {"type": "ndcg_at_20", "value": 47.394999999999996}, {"type": "ndcg_at_3", "value": 39.89}, {"type": "ndcg_at_5", "value": 42.418}, {"type": "precision_at_1", "value": 33.667}, {"type": "precision_at_10", "value": 6.4670000000000005}, {"type": "precision_at_100", "value": 0.9169999999999999}, {"type": "precision_at_1000", "value": 0.108}, {"type": "precision_at_20", "value": 3.733}, {"type": "precision_at_3", "value": 16.111}, {"type": "precision_at_5", "value": 11.133}, {"type": "recall_at_1", "value": 31.722}, {"type": "recall_at_10", "value": 58.833}, {"type": "recall_at_100", "value": 81.472}, {"type": "recall_at_1000", "value": 95.367}, {"type": "recall_at_20", "value": 67.333}, {"type": "recall_at_3", "value": 44.5}, {"type": "recall_at_5", "value": 50.693999999999996}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions (default)", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cosine_accuracy", "value": 99.6}, {"type": "cosine_accuracy_threshold", "value": 70.9090530872345}, {"type": "cosine_ap", "value": 84.58074609745917}, {"type": "cosine_f1", "value": 78.88324873096447}, {"type": "cosine_f1_threshold", "value": 67.8337812423706}, {"type": "cosine_precision", "value": 80.10309278350516}, {"type": "cosine_recall", "value": 77.7}, {"type": "dot_accuracy", "value": 99.08415841584159}, {"type": "dot_accuracy_threshold", "value": 66384.36279296875}, {"type": "dot_ap", "value": 40.87152918329808}, {"type": "dot_f1", "value": 43.734015345268546}, {"type": "dot_f1_threshold", "value": 51844.3115234375}, {"type": "dot_precision", "value": 38.11292719167905}, {"type": "dot_recall", "value": 51.300000000000004}, {"type": "euclidean_accuracy", "value": 99.34158415841584}, {"type": "euclidean_accuracy_threshold", "value": 1737.0550155639648}, {"type": "euclidean_ap", "value": 62.13537131791382}, {"type": "euclidean_f1", "value": 61.27982646420824}, {"type": "euclidean_f1_threshold", "value": 
1902.7210235595703}, {"type": "euclidean_precision", "value": 66.9431279620853}, {"type": "euclidean_recall", "value": 56.49999999999999}, {"type": "main_score", "value": 84.58074648388171}, {"type": "manhattan_accuracy", "value": 99.29306930693069}, {"type": "manhattan_accuracy_threshold", "value": 31327.55126953125}, {"type": "manhattan_ap", "value": 57.216782641023634}, {"type": "manhattan_f1", "value": 57.296715131933226}, {"type": "manhattan_f1_threshold", "value": 35300.360107421875}, {"type": "manhattan_precision", "value": 62.07701283547258}, {"type": "manhattan_recall", "value": 53.2}, {"type": "max_accuracy", "value": 99.6}, {"type": "max_ap", "value": 84.58074648388171}, {"type": "max_f1", "value": 78.88324873096447}, {"type": "max_precision", "value": 80.10309278350516}, {"type": "max_recall", "value": 77.7}, {"type": "similarity_accuracy", "value": 99.6}, {"type": "similarity_accuracy_threshold", "value": 70.90907096862793}, {"type": "similarity_ap", "value": 84.58074648388171}, {"type": "similarity_f1", "value": 78.88324873096447}, {"type": "similarity_f1_threshold", "value": 67.83377528190613}, {"type": "similarity_precision", "value": 80.10309278350516}, {"type": "similarity_recall", "value": 77.7}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering (default)", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "main_score", "value": 29.912118265776584}, {"type": "v_measure", "value": 29.912118265776584}, {"type": "v_measure_std", "value": 4.886538571793255}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P (default)", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "main_score", "value": 26.453873918768515}, {"type": "v_measure", "value": 26.453873918768515}, {"type": "v_measure_std", "value": 1.585352021846518}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions (default)", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "main_score", "value": 43.20040993546698}, {"type": "map", "value": 43.20040993546698}, {"type": "mrr", "value": 43.80615503777269}, {"type": "nAUC_map_diff1", "value": 35.32927557160638}, {"type": "nAUC_map_max", "value": 16.99796264171325}, {"type": "nAUC_map_std", "value": 8.295193352979423}, {"type": "nAUC_mrr_diff1", "value": 34.8181761798891}, {"type": "nAUC_mrr_max", "value": 17.88328922464567}, {"type": "nAUC_mrr_std", "value": 9.16364844640502}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval (default)", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cosine_pearson", "value": 29.837020935210244}, {"type": "cosine_spearman", "value": 29.129192154438023}, {"type": "dot_pearson", "value": 18.178493108017275}, {"type": "dot_spearman", "value": 20.21762456537728}, {"type": "main_score", "value": 29.129192154438023}, {"type": "pearson", "value": 29.837020935210244}, {"type": "spearman", "value": 29.129192154438023}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID (default)", "type": "mteb/trec-covid", "config": "default", "split": "test", "revision": 
"bb9466bac8153a0349341eb1b22e06409e78ef4e"}, "metrics": [{"type": "main_score", "value": 44.76}, {"type": "map_at_1", "value": 0.122}, {"type": "map_at_10", "value": 0.878}, {"type": "map_at_100", "value": 4.018999999999999}, {"type": "map_at_1000", "value": 9.258}, {"type": "map_at_20", "value": 1.415}, {"type": "map_at_3", "value": 0.338}, {"type": "map_at_5", "value": 0.526}, {"type": "mrr_at_1", "value": 56.00000000000001}, {"type": "mrr_at_10", "value": 66.07222222222222}, {"type": "mrr_at_100", "value": 66.50064823204359}, {"type": "mrr_at_1000", "value": 66.51969585109121}, {"type": "mrr_at_20", "value": 66.32619047619048}, {"type": "mrr_at_3", "value": 64.0}, {"type": "mrr_at_5", "value": 65.4}, {"type": "nauc_map_at_1000_diff1", "value": -8.083047410631284}, {"type": "nauc_map_at_1000_max", "value": 47.53446279402127}, {"type": "nauc_map_at_1000_std", "value": 59.96216691295325}, {"type": "nauc_map_at_100_diff1", "value": -7.739773992175417}, {"type": "nauc_map_at_100_max", "value": 30.194947003511906}, {"type": "nauc_map_at_100_std", "value": 44.21694014053059}, {"type": "nauc_map_at_10_diff1", "value": -8.68905409160312}, {"type": "nauc_map_at_10_max", "value": 1.0122820499818854}, {"type": "nauc_map_at_10_std", "value": 10.974665544255386}, {"type": "nauc_map_at_1_diff1", "value": -16.880540022219577}, {"type": "nauc_map_at_1_max", "value": -1.6691558276733682}, {"type": "nauc_map_at_1_std", "value": 6.632235219994278}, {"type": "nauc_map_at_20_diff1", "value": -10.664828887394167}, {"type": "nauc_map_at_20_max", "value": 8.898505999792377}, {"type": "nauc_map_at_20_std", "value": 19.532041203224537}, {"type": "nauc_map_at_3_diff1", "value": -9.330417800583005}, {"type": "nauc_map_at_3_max", "value": -2.790285962665549}, {"type": "nauc_map_at_3_std", "value": 7.4958144373878115}, {"type": "nauc_map_at_5_diff1", "value": -8.040423130198358}, {"type": "nauc_map_at_5_max", "value": -3.3129010825045415}, {"type": "nauc_map_at_5_std", "value": 7.140151615092149}, {"type": "nauc_mrr_at_1000_diff1", "value": 1.841967269111446}, {"type": "nauc_mrr_at_1000_max", "value": 19.218649788535302}, {"type": "nauc_mrr_at_1000_std", "value": 34.05865638916581}, {"type": "nauc_mrr_at_100_diff1", "value": 1.7162098924657265}, {"type": "nauc_mrr_at_100_max", "value": 19.23051404537602}, {"type": "nauc_mrr_at_100_std", "value": 34.043079302164195}, {"type": "nauc_mrr_at_10_diff1", "value": 2.671707378955639}, {"type": "nauc_mrr_at_10_max", "value": 19.61245805830406}, {"type": "nauc_mrr_at_10_std", "value": 33.860985121025664}, {"type": "nauc_mrr_at_1_diff1", "value": -4.9369747899159915}, {"type": "nauc_mrr_at_1_max", "value": 18.70315693845101}, {"type": "nauc_mrr_at_1_std", "value": 31.19747899159659}, {"type": "nauc_mrr_at_20_diff1", "value": 2.2679812975747393}, {"type": "nauc_mrr_at_20_max", "value": 18.88077606059037}, {"type": "nauc_mrr_at_20_std", "value": 34.45425371871214}, {"type": "nauc_mrr_at_3_diff1", "value": 2.8102165970771873}, {"type": "nauc_mrr_at_3_max", "value": 19.9547668754349}, {"type": "nauc_mrr_at_3_std", "value": 32.230232254697256}, {"type": "nauc_mrr_at_5_diff1", "value": 2.056260588169657}, {"type": "nauc_mrr_at_5_max", "value": 20.00122859400373}, {"type": "nauc_mrr_at_5_std", "value": 33.385407684686385}, {"type": "nauc_ndcg_at_1000_diff1", "value": -10.634273510767326}, {"type": "nauc_ndcg_at_1000_max", "value": 36.83968691011661}, {"type": "nauc_ndcg_at_1000_std", "value": 52.736058094433346}, {"type": "nauc_ndcg_at_100_diff1", "value": 0.9900193680768492}, 
{"type": "nauc_ndcg_at_100_max", "value": 33.837077460710816}, {"type": "nauc_ndcg_at_100_std", "value": 47.8838924407509}, {"type": "nauc_ndcg_at_10_diff1", "value": -0.17969764223238982}, {"type": "nauc_ndcg_at_10_max", "value": 20.98725746563983}, {"type": "nauc_ndcg_at_10_std", "value": 34.94240929181837}, {"type": "nauc_ndcg_at_1_diff1", "value": -15.90606217193831}, {"type": "nauc_ndcg_at_1_max", "value": 14.845386058908314}, {"type": "nauc_ndcg_at_1_std", "value": 27.80603225543255}, {"type": "nauc_ndcg_at_20_diff1", "value": -2.610422392632454}, {"type": "nauc_ndcg_at_20_max", "value": 23.712304742527216}, {"type": "nauc_ndcg_at_20_std", "value": 37.068579726264616}, {"type": "nauc_ndcg_at_3_diff1", "value": -1.296272800008927}, {"type": "nauc_ndcg_at_3_max", "value": 21.18656426647708}, {"type": "nauc_ndcg_at_3_std", "value": 35.00996581698709}, {"type": "nauc_ndcg_at_5_diff1", "value": 0.9228761005863567}, {"type": "nauc_ndcg_at_5_max", "value": 20.533612497239876}, {"type": "nauc_ndcg_at_5_std", "value": 33.746097407453505}, {"type": "nauc_precision_at_1000_diff1", "value": 2.212860642793429}, {"type": "nauc_precision_at_1000_max", "value": 42.83693570346947}, {"type": "nauc_precision_at_1000_std", "value": 56.34352031668012}, {"type": "nauc_precision_at_100_diff1", "value": 3.0398947714805473}, {"type": "nauc_precision_at_100_max", "value": 37.33236107395733}, {"type": "nauc_precision_at_100_std", "value": 51.46402436623219}, {"type": "nauc_precision_at_10_diff1", "value": 7.751232774751116}, {"type": "nauc_precision_at_10_max", "value": 23.34708251923681}, {"type": "nauc_precision_at_10_std", "value": 35.85367282451008}, {"type": "nauc_precision_at_1_diff1", "value": -4.9369747899159915}, {"type": "nauc_precision_at_1_max", "value": 18.70315693845101}, {"type": "nauc_precision_at_1_std", "value": 31.19747899159659}, {"type": "nauc_precision_at_20_diff1", "value": 2.6773822842226416}, {"type": "nauc_precision_at_20_max", "value": 27.773465147606125}, {"type": "nauc_precision_at_20_std", "value": 40.8346461486944}, {"type": "nauc_precision_at_3_diff1", "value": 10.025088532578964}, {"type": "nauc_precision_at_3_max", "value": 23.118618169053402}, {"type": "nauc_precision_at_3_std", "value": 36.718048256708336}, {"type": "nauc_precision_at_5_diff1", "value": 10.65022351628208}, {"type": "nauc_precision_at_5_max", "value": 21.415166686410064}, {"type": "nauc_precision_at_5_std", "value": 34.26813225180961}, {"type": "nauc_recall_at_1000_diff1", "value": -15.087404046972116}, {"type": "nauc_recall_at_1000_max", "value": 36.36800488936171}, {"type": "nauc_recall_at_1000_std", "value": 51.729821669192646}, {"type": "nauc_recall_at_100_diff1", "value": -10.615762204096805}, {"type": "nauc_recall_at_100_max", "value": 24.08701047895384}, {"type": "nauc_recall_at_100_std", "value": 39.67258536375483}, {"type": "nauc_recall_at_10_diff1", "value": -7.067104621282379}, {"type": "nauc_recall_at_10_max", "value": -1.9673720028196857}, {"type": "nauc_recall_at_10_std", "value": 5.8769977919557785}, {"type": "nauc_recall_at_1_diff1", "value": -16.880540022219577}, {"type": "nauc_recall_at_1_max", "value": -1.6691558276733682}, {"type": "nauc_recall_at_1_std", "value": 6.632235219994278}, {"type": "nauc_recall_at_20_diff1", "value": -10.004017517116134}, {"type": "nauc_recall_at_20_max", "value": 4.75366175077321}, {"type": "nauc_recall_at_20_std", "value": 17.49313281300582}, {"type": "nauc_recall_at_3_diff1", "value": 0.5629010662361658}, {"type": "nauc_recall_at_3_max", "value": 
-7.882772867263189}, {"type": "nauc_recall_at_3_std", "value": 2.238252718990748}, {"type": "nauc_recall_at_5_diff1", "value": -2.374440704673045}, {"type": "nauc_recall_at_5_max", "value": -6.804152379891169}, {"type": "nauc_recall_at_5_std", "value": 1.5154169968307243}, {"type": "ndcg_at_1", "value": 50.0}, {"type": "ndcg_at_10", "value": 44.76}, {"type": "ndcg_at_100", "value": 31.022}, {"type": "ndcg_at_1000", "value": 26.223000000000003}, {"type": "ndcg_at_20", "value": 41.703}, {"type": "ndcg_at_3", "value": 49.838}, {"type": "ndcg_at_5", "value": 48.219}, {"type": "precision_at_1", "value": 56.00000000000001}, {"type": "precision_at_10", "value": 48.0}, {"type": "precision_at_100", "value": 31.66}, {"type": "precision_at_1000", "value": 12.598}, {"type": "precision_at_20", "value": 44.1}, {"type": "precision_at_3", "value": 55.333}, {"type": "precision_at_5", "value": 52.400000000000006}, {"type": "recall_at_1", "value": 0.122}, {"type": "recall_at_10", "value": 1.093}, {"type": "recall_at_100", "value": 6.6339999999999995}, {"type": "recall_at_1000", "value": 24.934}, {"type": "recall_at_20", "value": 1.926}, {"type": "recall_at_3", "value": 0.379}, {"type": "recall_at_5", "value": 0.611}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020 (default)", "type": "mteb/touche2020", "config": "default", "split": "test", "revision": "a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f"}, "metrics": [{"type": "main_score", "value": 11.737}, {"type": "map_at_1", "value": 0.86}, {"type": "map_at_10", "value": 3.569}, {"type": "map_at_100", "value": 6.272}, {"type": "map_at_1000", "value": 7.591}, {"type": "map_at_20", "value": 4.599}, {"type": "map_at_3", "value": 2.1229999999999998}, {"type": "map_at_5", "value": 2.738}, {"type": "mrr_at_1", "value": 14.285714285714285}, {"type": "mrr_at_10", "value": 29.435536119209587}, {"type": "mrr_at_100", "value": 30.863925639814255}, {"type": "mrr_at_1000", "value": 30.863925639814255}, {"type": "mrr_at_20", "value": 30.459159854417955}, {"type": "mrr_at_3", "value": 25.510204081632658}, {"type": "mrr_at_5", "value": 27.34693877551021}, {"type": "nauc_map_at_1000_diff1", "value": -15.225998878041644}, {"type": "nauc_map_at_1000_max", "value": -37.62784726123152}, {"type": "nauc_map_at_1000_std", "value": -40.49662774337752}, {"type": "nauc_map_at_100_diff1", "value": -17.241253449657865}, {"type": "nauc_map_at_100_max", "value": -39.87742899339114}, {"type": "nauc_map_at_100_std", "value": -43.461254035113015}, {"type": "nauc_map_at_10_diff1", "value": -18.2332059968299}, {"type": "nauc_map_at_10_max", "value": -33.098533635572316}, {"type": "nauc_map_at_10_std", "value": -36.84786857582744}, {"type": "nauc_map_at_1_diff1", "value": -14.429325321729767}, {"type": "nauc_map_at_1_max", "value": -27.646469766953775}, {"type": "nauc_map_at_1_std", "value": -22.319540072780857}, {"type": "nauc_map_at_20_diff1", "value": -20.20731257532461}, {"type": "nauc_map_at_20_max", "value": -38.80220712468868}, {"type": "nauc_map_at_20_std", "value": -42.26801449643297}, {"type": "nauc_map_at_3_diff1", "value": -20.779843046007446}, {"type": "nauc_map_at_3_max", "value": -39.53842231266448}, {"type": "nauc_map_at_3_std", "value": -33.56558692084304}, {"type": "nauc_map_at_5_diff1", "value": -19.66219267837773}, {"type": "nauc_map_at_5_max", "value": -37.06326821351946}, {"type": "nauc_map_at_5_std", "value": -36.957816069501106}, {"type": "nauc_mrr_at_1000_diff1", "value": -18.677101035122053}, {"type": "nauc_mrr_at_1000_max", "value": 
-35.95960963659799}, {"type": "nauc_mrr_at_1000_std", "value": -37.756381781688766}, {"type": "nauc_mrr_at_100_diff1", "value": -18.677101035122053}, {"type": "nauc_mrr_at_100_max", "value": -35.95960963659799}, {"type": "nauc_mrr_at_100_std", "value": -37.756381781688766}, {"type": "nauc_mrr_at_10_diff1", "value": -18.191174363420938}, {"type": "nauc_mrr_at_10_max", "value": -36.36477111799858}, {"type": "nauc_mrr_at_10_std", "value": -39.49983032196089}, {"type": "nauc_mrr_at_1_diff1", "value": -12.86145482800598}, {"type": "nauc_mrr_at_1_max", "value": -24.487052771897265}, {"type": "nauc_mrr_at_1_std", "value": -20.52556557495329}, {"type": "nauc_mrr_at_20_diff1", "value": -18.60997224510311}, {"type": "nauc_mrr_at_20_max", "value": -35.79812432900392}, {"type": "nauc_mrr_at_20_std", "value": -38.30897001988249}, {"type": "nauc_mrr_at_3_diff1", "value": -25.212140640066988}, {"type": "nauc_mrr_at_3_max", "value": -37.42857037379736}, {"type": "nauc_mrr_at_3_std", "value": -33.92966300567053}, {"type": "nauc_mrr_at_5_diff1", "value": -20.640207781943023}, {"type": "nauc_mrr_at_5_max", "value": -35.90540839091833}, {"type": "nauc_mrr_at_5_std", "value": -37.12194516618917}, {"type": "nauc_ndcg_at_1000_diff1", "value": -0.11963001842743652}, {"type": "nauc_ndcg_at_1000_max", "value": -27.9178453384242}, {"type": "nauc_ndcg_at_1000_std", "value": -29.166624762081454}, {"type": "nauc_ndcg_at_100_diff1", "value": -12.091987337723797}, {"type": "nauc_ndcg_at_100_max", "value": -40.82288385710299}, {"type": "nauc_ndcg_at_100_std", "value": -46.76058302199178}, {"type": "nauc_ndcg_at_10_diff1", "value": -15.828838900116663}, {"type": "nauc_ndcg_at_10_max", "value": -28.47740914640201}, {"type": "nauc_ndcg_at_10_std", "value": -39.61604315349557}, {"type": "nauc_ndcg_at_1_diff1", "value": -14.384548055467114}, {"type": "nauc_ndcg_at_1_max", "value": -22.305774061633038}, {"type": "nauc_ndcg_at_1_std", "value": -21.059675286871425}, {"type": "nauc_ndcg_at_20_diff1", "value": -18.484696865224056}, {"type": "nauc_ndcg_at_20_max", "value": -36.75133962699779}, {"type": "nauc_ndcg_at_20_std", "value": -45.00325838241873}, {"type": "nauc_ndcg_at_3_diff1", "value": -19.074080663504287}, {"type": "nauc_ndcg_at_3_max", "value": -32.15749618445631}, {"type": "nauc_ndcg_at_3_std", "value": -31.15778856351426}, {"type": "nauc_ndcg_at_5_diff1", "value": -17.075509240224072}, {"type": "nauc_ndcg_at_5_max", "value": -30.166046803360015}, {"type": "nauc_ndcg_at_5_std", "value": -35.59973493388717}, {"type": "nauc_precision_at_1000_diff1", "value": 21.84245546736574}, {"type": "nauc_precision_at_1000_max", "value": 38.516370901785876}, {"type": "nauc_precision_at_1000_std", "value": 35.95207951618072}, {"type": "nauc_precision_at_100_diff1", "value": 1.3876384351895321}, {"type": "nauc_precision_at_100_max", "value": -17.672181963540233}, {"type": "nauc_precision_at_100_std", "value": -35.100445067927325}, {"type": "nauc_precision_at_10_diff1", "value": -8.38470122188378}, {"type": "nauc_precision_at_10_max", "value": -21.522897385575003}, {"type": "nauc_precision_at_10_std", "value": -42.22825505115226}, {"type": "nauc_precision_at_1_diff1", "value": -12.86145482800598}, {"type": "nauc_precision_at_1_max", "value": -24.487052771897265}, {"type": "nauc_precision_at_1_std", "value": -20.52556557495329}, {"type": "nauc_precision_at_20_diff1", "value": -16.93969917788429}, {"type": "nauc_precision_at_20_max", "value": -30.66989763742793}, {"type": "nauc_precision_at_20_std", "value": -46.641569381752156}, {"type": 
"nauc_precision_at_3_diff1", "value": -20.209351145881417}, {"type": "nauc_precision_at_3_max", "value": -37.489404692159376}, {"type": "nauc_precision_at_3_std", "value": -36.11843668070083}, {"type": "nauc_precision_at_5_diff1", "value": -13.00046064709639}, {"type": "nauc_precision_at_5_max", "value": -29.182846254852958}, {"type": "nauc_precision_at_5_std", "value": -41.475754864735954}, {"type": "nauc_recall_at_1000_diff1", "value": 12.384650251660787}, {"type": "nauc_recall_at_1000_max", "value": -22.150720232837372}, {"type": "nauc_recall_at_1000_std", "value": -4.87263784450895}, {"type": "nauc_recall_at_100_diff1", "value": -10.460274590185362}, {"type": "nauc_recall_at_100_max", "value": -46.395760301872606}, {"type": "nauc_recall_at_100_std", "value": -44.967105074272865}, {"type": "nauc_recall_at_10_diff1", "value": -15.886566681130422}, {"type": "nauc_recall_at_10_max", "value": -36.08360858042893}, {"type": "nauc_recall_at_10_std", "value": -43.44706180483}, {"type": "nauc_recall_at_1_diff1", "value": -14.429325321729767}, {"type": "nauc_recall_at_1_max", "value": -27.646469766953775}, {"type": "nauc_recall_at_1_std", "value": -22.319540072780857}, {"type": "nauc_recall_at_20_diff1", "value": -20.572085163574663}, {"type": "nauc_recall_at_20_max", "value": -45.09259936557314}, {"type": "nauc_recall_at_20_std", "value": -50.36930511127456}, {"type": "nauc_recall_at_3_diff1", "value": -25.55698987960452}, {"type": "nauc_recall_at_3_max", "value": -44.841701912628395}, {"type": "nauc_recall_at_3_std", "value": -33.629299677212664}, {"type": "nauc_recall_at_5_diff1", "value": -21.025629383069223}, {"type": "nauc_recall_at_5_max", "value": -41.163164440917406}, {"type": "nauc_recall_at_5_std", "value": -40.978074434880654}, {"type": "ndcg_at_1", "value": 13.264999999999999}, {"type": "ndcg_at_10", "value": 11.737}, {"type": "ndcg_at_100", "value": 20.893}, {"type": "ndcg_at_1000", "value": 34.148}, {"type": "ndcg_at_20", "value": 12.781}, {"type": "ndcg_at_3", "value": 13.961000000000002}, {"type": "ndcg_at_5", "value": 12.735}, {"type": "precision_at_1", "value": 14.285999999999998}, {"type": "precision_at_10", "value": 11.429}, {"type": "precision_at_100", "value": 5.061}, {"type": "precision_at_1000", "value": 1.327}, {"type": "precision_at_20", "value": 9.796000000000001}, {"type": "precision_at_3", "value": 17.007}, {"type": "precision_at_5", "value": 14.693999999999999}, {"type": "recall_at_1", "value": 0.86}, {"type": "recall_at_10", "value": 7.962}, {"type": "recall_at_100", "value": 31.343}, {"type": "recall_at_1000", "value": 72.173}, {"type": "recall_at_20", "value": 13.209000000000001}, {"type": "recall_at_3", "value": 3.4639999999999995}, {"type": "recall_at_5", "value": 5.061}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification (default)", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 63.30078125000001}, {"type": "ap", "value": 10.382758929598857}, {"type": "ap_weighted", "value": 10.382758929598857}, {"type": "f1", "value": 47.95923360740176}, {"type": "f1_weighted", "value": 71.3431138095925}, {"type": "main_score", "value": 63.30078125000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification (default)", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": 
"d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 49.787775891341255}, {"type": "f1", "value": 49.934050367781495}, {"type": "f1_weighted", "value": 49.25778188511025}, {"type": "main_score", "value": 49.787775891341255}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering (default)", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "main_score", "value": 20.13387853092354}, {"type": "v_measure", "value": 20.13387853092354}, {"type": "v_measure_std", "value": 2.2532678030932582}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015 (default)", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cosine_accuracy", "value": 82.44620611551528}, {"type": "cosine_accuracy_threshold", "value": 67.1613335609436}, {"type": "cosine_ap", "value": 61.027812391627634}, {"type": "cosine_f1", "value": 57.648077160875474}, {"type": "cosine_f1_threshold", "value": 60.86677312850952}, {"type": "cosine_precision", "value": 54.24714917384221}, {"type": "cosine_recall", "value": 61.50395778364116}, {"type": "dot_accuracy", "value": 78.45860404124694}, {"type": "dot_accuracy_threshold", "value": 83239.31884765625}, {"type": "dot_ap", "value": 44.32467940837404}, {"type": "dot_f1", "value": 47.685779137471634}, {"type": "dot_f1_threshold", "value": 55795.2392578125}, {"type": "dot_precision", "value": 38.08923222449945}, {"type": "dot_recall", "value": 63.746701846965706}, {"type": "euclidean_accuracy", "value": 80.58055671454967}, {"type": "euclidean_accuracy_threshold", "value": 2302.2579193115234}, {"type": "euclidean_ap", "value": 55.2462162515812}, {"type": "euclidean_f1", "value": 54.27702017356023}, {"type": "euclidean_f1_threshold", "value": 2842.241096496582}, {"type": "euclidean_precision", "value": 47.37359826873893}, {"type": "euclidean_recall", "value": 63.53562005277045}, {"type": "main_score", "value": 61.027837268240226}, {"type": "manhattan_accuracy", "value": 80.77129403349824}, {"type": "manhattan_accuracy_threshold", "value": 43584.36279296875}, {"type": "manhattan_ap", "value": 56.045117634111655}, {"type": "manhattan_f1", "value": 54.80427046263346}, {"type": "manhattan_f1_threshold", "value": 51295.8740234375}, {"type": "manhattan_precision", "value": 49.78448275862069}, {"type": "manhattan_recall", "value": 60.94986807387863}, {"type": "max_accuracy", "value": 82.44620611551528}, {"type": "max_ap", "value": 61.027837268240226}, {"type": "max_f1", "value": 57.648077160875474}, {"type": "max_precision", "value": 54.24714917384221}, {"type": "max_recall", "value": 63.746701846965706}, {"type": "similarity_accuracy", "value": 82.44620611551528}, {"type": "similarity_accuracy_threshold", "value": 67.1613335609436}, {"type": "similarity_ap", "value": 61.027837268240226}, {"type": "similarity_f1", "value": 57.648077160875474}, {"type": "similarity_f1_threshold", "value": 60.866761207580566}, {"type": "similarity_precision", "value": 54.24714917384221}, {"type": "similarity_recall", "value": 61.50395778364116}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus (default)", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": 
[{"type": "cosine_accuracy", "value": 86.56032910311639}, {"type": "cosine_accuracy_threshold", "value": 63.58056664466858}, {"type": "cosine_ap", "value": 80.36069089360147}, {"type": "cosine_f1", "value": 72.49717349283344}, {"type": "cosine_f1_threshold", "value": 57.18348026275635}, {"type": "cosine_precision", "value": 68.87256600374194}, {"type": "cosine_recall", "value": 76.52448413920541}, {"type": "dot_accuracy", "value": 83.29064307059417}, {"type": "dot_accuracy_threshold", "value": 39571.136474609375}, {"type": "dot_ap", "value": 70.9168154298791}, {"type": "dot_f1", "value": 65.80363636363637}, {"type": "dot_f1_threshold", "value": 33795.39489746094}, {"type": "dot_precision", "value": 62.348401323043}, {"type": "dot_recall", "value": 69.66430551278103}, {"type": "euclidean_accuracy", "value": 83.87472348352544}, {"type": "euclidean_accuracy_threshold", "value": 1921.6852188110352}, {"type": "euclidean_ap", "value": 72.19667035000438}, {"type": "euclidean_f1", "value": 64.49932928272706}, {"type": "euclidean_f1_threshold", "value": 2122.101593017578}, {"type": "euclidean_precision", "value": 66.14338889787992}, {"type": "euclidean_recall", "value": 62.935016938712664}, {"type": "main_score", "value": 80.36069259910931}, {"type": "manhattan_accuracy", "value": 83.8514378856677}, {"type": "manhattan_accuracy_threshold", "value": 35123.6572265625}, {"type": "manhattan_ap", "value": 72.24797710989144}, {"type": "manhattan_f1", "value": 64.65182603184662}, {"type": "manhattan_f1_threshold", "value": 38842.54150390625}, {"type": "manhattan_precision", "value": 66.57692935225975}, {"type": "manhattan_recall", "value": 62.83492454573453}, {"type": "max_accuracy", "value": 86.56032910311639}, {"type": "max_ap", "value": 80.36069259910931}, {"type": "max_f1", "value": 72.49717349283344}, {"type": "max_precision", "value": 68.87256600374194}, {"type": "max_recall", "value": 76.52448413920541}, {"type": "similarity_accuracy", "value": 86.56032910311639}, {"type": "similarity_accuracy_threshold", "value": 63.58058452606201}, {"type": "similarity_ap", "value": 80.36069259910931}, {"type": "similarity_f1", "value": 72.49717349283344}, {"type": "similarity_f1_threshold", "value": 57.18348026275635}, {"type": "similarity_precision", "value": 68.87256600374194}, {"type": "similarity_recall", "value": 76.52448413920541}]}]}]}
dataset
null
524
mradermacher/Einstein-v5-v0.2-7B-i1-GGUF
mradermacher
null
[ "transformers", "gguf", "axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "base_model:Weyaxi/Einstein-v5-v0.2-7B", "base_model:quantized:Weyaxi/Einstein-v5-v0.2-7B", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
2024-11-18T05:01:28Z
2024-11-18T06:31:10+00:00
256
0
--- base_model: Weyaxi/Einstein-v5-v0.2-7B datasets: - allenai/ai2_arc - camel-ai/physics - camel-ai/chemistry - camel-ai/biology - camel-ai/math - metaeval/reclor - openbookqa - mandyyyyii/scibench - derek-thomas/ScienceQA - TIGER-Lab/ScienceEval - jondurbin/airoboros-3.2 - LDJnr/Capybara - Cot-Alpaca-GPT4-From-OpenHermes-2.5 - STEM-AI-mtl/Electrical-engineering - knowrohit07/saraswati-stem - sablo/oasst2_curated - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - bigbio/med_qa - meta-math/MetaMathQA-40K - openbookqa - piqa - metaeval/reclor - derek-thomas/ScienceQA - scibench - sciq - Open-Orca/SlimOrca - migtissera/Synthia-v1.3 - TIGER-Lab/ScienceEval - allenai/WildChat - microsoft/orca-math-word-problems-200k - openchat/openchat_sharegpt4_dataset - teknium/GPTeacher-General-Instruct - m-a-p/CodeFeedback-Filtered-Instruction language: - en library_name: transformers license: other tags: - axolotl - generated_from_trainer - Mistral - instruct - finetune - chatml - gpt4 - synthetic data - science - physics - chemistry - biology - math quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | 
[GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
[ "SCIQ" ]
Non_BioNLP
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_0_8_8.gguf) | 
i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v5-v0.2-7B-i1-GGUF/resolve/main/Einstein-v5-v0.2-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
{"base_model": "Weyaxi/Einstein-v5-v0.2-7B", "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval", "allenai/WildChat", "microsoft/orca-math-word-problems-200k", "openchat/openchat_sharegpt4_dataset", "teknium/GPTeacher-General-Instruct", "m-a-p/CodeFeedback-Filtered-Instruction"], "language": ["en"], "library_name": "transformers", "license": "other", "tags": ["axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math"], "quantized_by": "mradermacher"}
dataset
null
525
fblgit/UNA-ThePitbull-21.4-v1
fblgit
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "UNA", "juanako", "conversational", "license:afl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-05-24T15:41:19Z
2024-05-28T14:56:31+00:00
19
5
--- library_name: transformers license: afl-3.0 tags: - UNA - juanako --- # For better performance, check out our v2 at [fblgit/UNA-ThePitbull-21.4B-v2](https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2) # UNA-ThePitbull 21.4B v1 Introducing the best LLM in the industry. Nearly as good as a 70B, just a 21.4B based on saltlux/luxia-21.4b-alignment-v1.0 ![UNA - ThePitbull 21.4B v1](https://huggingface.co/fblgit/UNA-ThePitbull-21.4-v1/resolve/main/UNA-ThePitbull.png) This model has not been poisoned to score high and be useless. We release him because it's the real deal of EQ & IQ all together in a crazy powerful, smart and conversational model. So far the #1 of them as of 25/5/2024. Quant version available at [bartowski/UNA-ThePitbull-21.4-v1-GGUF](https://huggingface.co/bartowski/UNA-ThePitbull-21.4-v1-GGUF) # For better performance, check out our v2 at [fblgit/UNA-ThePitbull-21.4B-v2](https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2) # Evaluations It can only be compared with its non-UNA base model: the original luxia-21.4b. ## UNA (VLLM) Evaluations ``` | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |--------------|------:|----------------|-----:|-----------|-----:|---|-----:| |gsm8k | 3|strict-match | 5|exact_match|0.7566|± |0.0118| | | |flexible-extract| 5|exact_match|0.7582|± |0.0118| |hellaswag | 1|none | 10|acc |0.8168|± |0.0039| | | |none | 10|acc_norm |0.9188|± |0.0027| |winogrande | 1|none | 5|acc |0.8635|± |0.0097| |mmlu | N/A|none | 0|acc |0.6444|± |0.0038| |arc_challenge | 1|none | 25|acc |0.7747|± |0.0122| | | |none | 25|acc_norm |0.7850|± |0.0120| |truthfulqa_mc2| 2|none | 0|acc |0.7902|± |0.0134| |mathqa | 1|none | 0|acc |0.4030|± | 0.009| | | |none | 0|acc_norm |0.4034|± | 0.009| |pubmedqa | 1|none | 0|acc |0.6860|± |0.0208| |boolq | 2|none | 0|acc |0.8401|± |0.0064| ``` ## Original (VLLM) Evaluations ``` | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |--------------|------:|----------------|-----:|-----------|-----:|---|-----:| |gsm8k | 3|strict-match | 5|exact_match|0.7528|± |0.0119| | | |flexible-extract| 5|exact_match|0.7521|± |0.0119| |hellaswag | 1|none | 10|acc |0.8117|± |0.0039| | | |none | 10|acc_norm |0.9167|± |0.0028| |winogrande | 1|none | 5|acc |0.8682|± |0.0095| |mmlu | N/A|none | 0|acc |0.6448|± |0.0038| |arc_challenge | 1|none | 25|acc |0.7688|± |0.0123| | | |none | 25|acc_norm |0.7730|± |0.0122| |truthfulqa_mc2| 2|none | 0|acc |0.7895|± |0.0133| |mathqa | 1|none | 0|acc |0.4000|± | 0.009| | | |none | 0|acc_norm |0.4003|± | 0.009| |pubmedqa | 1|none | 0|acc |0.6680|± |0.0211| |boolq | 2|none | 0|acc |0.8346|± |0.0065| ``` ## UNA Details Only the MLP layers were uniformed, leaving room for further optimisations. You should be able to perform SFT+DPO again on this model at moderate speeds (1e-4/2e-5/etc.).
[ "PUBMEDQA" ]
Non_BioNLP
# For better performance, check out our v2 at [fblgit/UNA-ThePitbull-21.4B-v2](https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2) # UNA-ThePitbull 21.4B v1 Introducing the best LLM in the industry. Nearly as good as a 70B, just a 21.4B based on saltlux/luxia-21.4b-alignment-v1.0 ![UNA - ThePitbull 21.4B v1](https://huggingface.co/fblgit/UNA-ThePitbull-21.4-v1/resolve/main/UNA-ThePitbull.png) This model has not been poisoned to score high and be useless. We release him because it's the real deal of EQ & IQ all together in a crazy powerful, smart and conversational model. So far the #1 of them as of 25/5/2024. Quant version available at [bartowski/UNA-ThePitbull-21.4-v1-GGUF](https://huggingface.co/bartowski/UNA-ThePitbull-21.4-v1-GGUF) # For better performance, check out our v2 at [fblgit/UNA-ThePitbull-21.4B-v2](https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2) # Evaluations It can only be compared with its non-UNA base model: the original luxia-21.4b. ## UNA (VLLM) Evaluations ``` | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |--------------|------:|----------------|-----:|-----------|-----:|---|-----:| |gsm8k | 3|strict-match | 5|exact_match|0.7566|± |0.0118| | | |flexible-extract| 5|exact_match|0.7582|± |0.0118| |hellaswag | 1|none | 10|acc |0.8168|± |0.0039| | | |none | 10|acc_norm |0.9188|± |0.0027| |winogrande | 1|none | 5|acc |0.8635|± |0.0097| |mmlu | N/A|none | 0|acc |0.6444|± |0.0038| |arc_challenge | 1|none | 25|acc |0.7747|± |0.0122| | | |none | 25|acc_norm |0.7850|± |0.0120| |truthfulqa_mc2| 2|none | 0|acc |0.7902|± |0.0134| |mathqa | 1|none | 0|acc |0.4030|± | 0.009| | | |none | 0|acc_norm |0.4034|± | 0.009| |pubmedqa | 1|none | 0|acc |0.6860|± |0.0208| |boolq | 2|none | 0|acc |0.8401|± |0.0064| ``` ## Original (VLLM) Evaluations ``` | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |--------------|------:|----------------|-----:|-----------|-----:|---|-----:| |gsm8k | 3|strict-match | 5|exact_match|0.7528|± |0.0119| | | |flexible-extract| 5|exact_match|0.7521|± |0.0119| |hellaswag | 1|none | 10|acc |0.8117|± |0.0039| | | |none | 10|acc_norm |0.9167|± |0.0028| |winogrande | 1|none | 5|acc |0.8682|± |0.0095| |mmlu | N/A|none | 0|acc |0.6448|± |0.0038| |arc_challenge | 1|none | 25|acc |0.7688|± |0.0123| | | |none | 25|acc_norm |0.7730|± |0.0122| |truthfulqa_mc2| 2|none | 0|acc |0.7895|± |0.0133| |mathqa | 1|none | 0|acc |0.4000|± | 0.009| | | |none | 0|acc_norm |0.4003|± | 0.009| |pubmedqa | 1|none | 0|acc |0.6680|± |0.0211| |boolq | 2|none | 0|acc |0.8346|± |0.0065| ``` ## UNA Details Only the MLP layers were uniformed, leaving room for further optimisations. You should be able to perform SFT+DPO again on this model at moderate speeds (1e-4/2e-5/etc.).
{"library_name": "transformers", "license": "afl-3.0", "tags": ["UNA", "juanako"]}
dataset
null
526
ntc-ai/SDXL-LoRA-slider.cosplay-outfit
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2023-12-15T01:28:39Z
2024-02-06T00:33:02+00:00
68
2
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/cosplay outfit_17_3.0.png widget: - text: cosplay outfit output: url: images/cosplay outfit_17_3.0.png - text: cosplay outfit output: url: images/cosplay outfit_19_3.0.png - text: cosplay outfit output: url: images/cosplay outfit_20_3.0.png - text: cosplay outfit output: url: images/cosplay outfit_21_3.0.png - text: cosplay outfit output: url: images/cosplay outfit_22_3.0.png inference: false instance_prompt: cosplay outfit --- # ntcai.xyz slider - cosplay outfit (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/cosplay outfit_17_-3.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_17_0.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_17_3.0.png" width=256 height=256 /> | | <img src="images/cosplay outfit_19_-3.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_19_0.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_19_3.0.png" width=256 height=256 /> | | <img src="images/cosplay outfit_20_-3.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_20_0.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/a66ce096-1781-46fe-8944-7c7a1a03714c](https://sliders.ntcai.xyz/sliders/app/loras/a66ce096-1781-46fe-8944-7c7a1a03714c) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` cosplay outfit ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.cosplay-outfit', weight_name='cosplay outfit.safetensors', adapter_name="cosplay outfit") # Activate the LoRA pipe.set_adapters(["cosplay outfit"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, cosplay outfit" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. 
## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
Non_BioNLP
# ntcai.xyz slider - cosplay outfit (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/cosplay outfit_17_-3.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_17_0.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_17_3.0.png" width=256 height=256 /> | | <img src="images/cosplay outfit_19_-3.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_19_0.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_19_3.0.png" width=256 height=256 /> | | <img src="images/cosplay outfit_20_-3.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_20_0.0.png" width=256 height=256 /> | <img src="images/cosplay outfit_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/a66ce096-1781-46fe-8944-7c7a1a03714c](https://sliders.ntcai.xyz/sliders/app/loras/a66ce096-1781-46fe-8944-7c7a1a03714c) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` cosplay outfit ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.cosplay-outfit', weight_name='cosplay outfit.safetensors', adapter_name="cosplay outfit") # Activate the LoRA pipe.set_adapters(["cosplay outfit"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, cosplay outfit" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
{"base_model": "stabilityai/stable-diffusion-xl-base-1.0", "language": ["en"], "license": "mit", "tags": ["text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "diffusers"], "thumbnail": "images/cosplay outfit_17_3.0.png", "widget": [{"text": "cosplay outfit", "output": {"url": "images/cosplay outfit_17_3.0.png"}}, {"text": "cosplay outfit", "output": {"url": "images/cosplay outfit_19_3.0.png"}}, {"text": "cosplay outfit", "output": {"url": "images/cosplay outfit_20_3.0.png"}}, {"text": "cosplay outfit", "output": {"url": "images/cosplay outfit_21_3.0.png"}}, {"text": "cosplay outfit", "output": {"url": "images/cosplay outfit_22_3.0.png"}}], "inference": false, "instance_prompt": "cosplay outfit"}
dataset
null
527
twadada/llm_mse
twadada
null
[ "mteb", "model-index", "region:us" ]
2024-11-19T09:53:24Z
2024-11-19T09:53:33+00:00
0
0
--- tags: - mteb model-index: - name: no_model_name_available results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 68.60569715142428 - type: ap value: 19.05710055685074 - type: ap_weighted value: 19.05710055685074 - type: f1 value: 56.581673345537695 - type: f1_weighted value: 74.61143344921274 - type: main_score value: 68.60569715142428 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 68.56716417910447 - type: ap value: 31.32344301280815 - type: ap_weighted value: 31.32344301280815 - type: f1 value: 62.570662383384025 - type: f1_weighted value: 71.61789541976941 - type: main_score value: 68.56716417910447 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 63.276231263383295 - type: ap value: 77.029702826753 - type: ap_weighted value: 77.029702826753 - type: f1 value: 61.38234936043525 - type: f1_weighted value: 64.54688276108833 - type: main_score value: 63.276231263383295 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 44.368308351177724 - type: ap value: 10.954835146791183 - type: ap_weighted value: 10.954835146791183 - type: f1 value: 36.62906436161906 - type: f1_weighted value: 51.69895802800691 - type: main_score value: 44.368308351177724 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 36.808 - type: f1 value: 34.68301166695203 - type: f1_weighted value: 34.68301166695202 - type: main_score value: 36.808 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 27.057999999999993 - type: f1 value: 26.24275950859653 - type: f1_weighted value: 26.242759508596524 - type: main_score value: 27.057999999999993 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 31.064000000000004 - type: f1 value: 29.708079352003708 - type: f1_weighted value: 29.7080793520037 - type: main_score value: 31.064000000000004 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 29.43 - type: f1 value: 27.94855548400926 - type: f1_weighted value: 27.94855548400926 - type: main_score value: 29.43 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 20.787999999999997 - 
type: f1 value: 15.135022040282188 - type: f1_weighted value: 15.135022040282188 - type: main_score value: 20.787999999999997 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 21.914 - type: f1 value: 15.895956878609303 - type: f1_weighted value: 15.895956878609303 - type: main_score value: 21.914 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S (default) type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: main_score value: 19.890899955689118 - type: v_measure value: 19.890899955689118 - type: v_measure_std value: 15.234197799081727 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions (default) type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: main_score value: 49.123206371254746 - type: map value: 49.123206371254746 - type: mrr value: 62.31862551114629 - type: nAUC_map_diff1 value: 10.382490924755208 - type: nAUC_map_max value: 18.748869416562293 - type: nAUC_map_std value: 2.5774869725944383 - type: nAUC_mrr_diff1 value: 13.422210021656673 - type: nAUC_mrr_max value: 24.878571083763035 - type: nAUC_mrr_std value: -0.41050314967328677 - task: type: STS dataset: name: MTEB BIOSSES (default) type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cosine_pearson value: 54.66661709953381 - type: cosine_spearman value: 61.90442258245585 - type: euclidean_pearson value: 57.802209299685984 - type: euclidean_spearman value: 61.90442258245585 - type: main_score value: 61.90442258245585 - type: manhattan_pearson value: 58.05739954223122 - type: manhattan_spearman value: 62.10683683315609 - type: pearson value: 54.66661709953381 - type: spearman value: 61.90442258245585 - task: type: Classification dataset: name: MTEB Banking77Classification (default) type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 50.75324675324676 - type: f1 value: 50.08833636657759 - type: f1_weighted value: 50.08833636657759 - type: main_score value: 50.75324675324676 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S (default) type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: main_score value: 19.543768231624547 - type: v_measure value: 19.543768231624547 - type: v_measure_std value: 0.8448669358199523 - task: type: Classification dataset: name: MTEB EmotionClassification (default) type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 31.465 - type: f1 value: 27.518410158786278 - type: f1_weighted value: 32.729446691751605 - type: main_score value: 31.465 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 83.66393068855447 - type: f1 value: 83.02273407562654 - type: f1_weighted value: 83.66877159114159 - type: main_score value: 83.66393068855447 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: 
d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 63.97013243167089 - type: f1 value: 60.85033241575268 - type: f1_weighted value: 63.82115556806192 - type: main_score value: 63.97013243167089 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 62.37491661107405 - type: f1 value: 60.94290925815502 - type: f1_weighted value: 62.10717598146462 - type: main_score value: 62.37491661107405 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 62.95020357031006 - type: f1 value: 60.758971765144224 - type: f1_weighted value: 63.42247920372272 - type: main_score value: 62.95020357031006 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 12.613840086052347 - type: f1 value: 6.5750442135283 - type: f1_weighted value: 6.53244904380679 - type: main_score value: 12.613840086052347 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 14.759493670886076 - type: f1 value: 8.12843236923924 - type: f1_weighted value: 8.793246140296032 - type: main_score value: 14.759493670886076 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 49.43228454172367 - type: f1 value: 34.55112542095168 - type: f1_weighted value: 52.614378484454974 - type: main_score value: 49.43228454172367 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 39.01662440123979 - type: f1 value: 23.82791663064076 - type: f1_weighted value: 43.645398141967966 - type: main_score value: 39.01662440123979 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 37.11140760507005 - type: f1 value: 21.935352507756388 - type: f1_weighted value: 39.321275372065685 - type: main_score value: 37.11140760507005 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 33.7770122142186 - type: f1 value: 22.220964590376273 - type: f1_weighted value: 37.485286173160986 - type: main_score value: 33.7770122142186 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 5.453567586948727 - type: f1 value: 0.7075326300577311 - type: f1_weighted value: 2.3858630958577836 - type: main_score value: 5.453567586948727 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: 
ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 5.529837251356239 - type: f1 value: 1.2115090491792773 - type: f1_weighted value: 3.498070456864493 - type: main_score value: 5.529837251356239 - task: type: Classification dataset: name: MTEB MasakhaNEWSClassification (eng) type: mteb/masakhanews config: eng split: test revision: 18193f187b92da67168c655c9973a165ed9593dd metrics: - type: accuracy value: 64.5042194092827 - type: f1 value: 62.368592308141814 - type: f1_weighted value: 63.90417453510408 - type: main_score value: 64.5042194092827 - task: type: Clustering dataset: name: MTEB MasakhaNEWSClusteringS2S (eng) type: masakhane/masakhanews config: eng split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: main_score value: 24.84564500417387 - type: v_measure value: 24.84564500417387 - type: v_measure_std value: 22.286703004465615 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 2.219233355749832 - type: f1 value: 0.1932870095686131 - type: f1_weighted value: 0.251235487639337 - type: main_score value: 2.219233355749832 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 1.2844653665097512 - type: f1 value: 0.18710410412943543 - type: f1_weighted value: 0.2739907174462001 - type: main_score value: 1.2844653665097512 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 32.982515131136516 - type: f1 value: 29.879476335364973 - type: f1_weighted value: 32.59262194412672 - type: main_score value: 32.982515131136516 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 2.2125084061869535 - type: f1 value: 0.5736320148349802 - type: f1_weighted value: 0.7371018417507617 - type: main_score value: 2.2125084061869535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 27.165433759246802 - type: f1 value: 25.68362075943369 - type: f1_weighted value: 25.71202157696122 - type: main_score value: 27.165433759246802 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 10.665770006724948 - type: f1 value: 5.114611283180833 - type: f1_weighted value: 7.526848175428076 - type: main_score value: 10.665770006724948 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 31.661062542030933 - type: f1 value: 31.298953203005986 - type: f1_weighted value: 30.183076634560134 - type: main_score value: 31.661062542030933 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) 
type: mteb/amazon_massive_intent config: cy split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 27.995965030262276 - type: f1 value: 25.849404737727465 - type: f1_weighted value: 26.922571545761638 - type: main_score value: 27.995965030262276 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 36.73839946200404 - type: f1 value: 35.6799981256784 - type: f1_weighted value: 35.65583276626004 - type: main_score value: 36.73839946200404 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 1.1062542030934768 - type: f1 value: 0.3829753109058956 - type: f1_weighted value: 0.42459533841173747 - type: main_score value: 1.1062542030934768 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 2.3604572965702753 - type: f1 value: 0.9096234324517042 - type: f1_weighted value: 0.9394595549389105 - type: main_score value: 2.3604572965702753 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 32.68997982515132 - type: f1 value: 29.986572248952147 - type: f1_weighted value: 32.22231191644284 - type: main_score value: 32.68997982515132 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 36.70477471418964 - type: f1 value: 33.50288534893127 - type: f1_weighted value: 34.846130335010265 - type: main_score value: 36.70477471418964 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 2.96906523201076 - type: f1 value: 0.7797856721437596 - type: f1_weighted value: 0.6236996914225641 - type: main_score value: 2.96906523201076 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 31.01882985877606 - type: f1 value: 29.527835951539323 - type: f1_weighted value: 30.66568514409952 - type: main_score value: 31.01882985877606 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 3.2178883658372555 - type: f1 value: 0.5240681583697773 - type: f1_weighted value: 0.9198214868347652 - type: main_score value: 3.2178883658372555 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 37.11499663752522 - type: f1 value: 36.36396173693096 - type: f1_weighted value: 35.50337761684995 - type: 
main_score value: 37.11499663752522 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 26.7350369872226 - type: f1 value: 25.812896452146234 - type: f1_weighted value: 26.2226872478251 - type: main_score value: 26.7350369872226 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 34.97982515131137 - type: f1 value: 32.92316320729933 - type: f1_weighted value: 33.68424734170567 - type: main_score value: 34.97982515131137 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 1.546738399462004 - type: f1 value: 0.6491922803798055 - type: f1_weighted value: 0.36416059882684426 - type: main_score value: 1.546738399462004 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 25.16476126429052 - type: f1 value: 23.67218773633549 - type: f1_weighted value: 23.6371559019449 - type: main_score value: 25.16476126429052 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 33.79959650302623 - type: f1 value: 32.51301308582213 - type: f1_weighted value: 32.526479564865305 - type: main_score value: 33.79959650302623 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 29.49226630800269 - type: f1 value: 28.94940260858102 - type: f1_weighted value: 28.63948113059682 - type: main_score value: 29.49226630800269 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 1.6778749159381305 - type: f1 value: 0.9744693901937154 - type: f1_weighted value: 0.691053805319416 - type: main_score value: 1.6778749159381305 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 30.114324142568925 - type: f1 value: 29.430743039242152 - type: f1_weighted value: 29.04299307313548 - type: main_score value: 30.114324142568925 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 2.797579018157364 - type: f1 value: 1.144033688398988 - type: f1_weighted value: 1.0884768126381035 - type: main_score value: 2.797579018157364 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 
32.54539340954942 - type: f1 value: 31.521139537198316 - type: f1_weighted value: 31.530360085026093 - type: main_score value: 32.54539340954942 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 30.783456624075324 - type: f1 value: 29.604725003907866 - type: f1_weighted value: 29.685617024715732 - type: main_score value: 30.783456624075324 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 1.8426361802286482 - type: f1 value: 0.33542666799543247 - type: f1_weighted value: 0.2711276986927232 - type: main_score value: 1.8426361802286482 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 30.178211163416268 - type: f1 value: 29.37132431463145 - type: f1_weighted value: 29.494452777308833 - type: main_score value: 30.178211163416268 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 2.649630127774042 - type: f1 value: 1.7505098874789995 - type: f1_weighted value: 1.4639682364635813 - type: main_score value: 2.649630127774042 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 4.468728984532616 - type: f1 value: 2.090461109042727 - type: f1_weighted value: 2.7853674561791295 - type: main_score value: 4.468728984532616 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 33.27168796234029 - type: f1 value: 32.00481372908824 - type: f1_weighted value: 32.159041657111764 - type: main_score value: 33.27168796234029 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 0.749831876260928 - type: f1 value: 0.11432947296104061 - type: f1_weighted value: 0.0764038848837725 - type: main_score value: 0.749831876260928 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 32.125084061869536 - type: f1 value: 30.154247947358247 - type: f1_weighted value: 30.87288096360447 - type: main_score value: 32.125084061869536 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 1.617350369872226 - type: f1 value: 0.9905489260231543 - type: f1_weighted value: 0.7953294182207199 - type: main_score value: 1.617350369872226 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: 
mteb/amazon_massive_intent config: ja split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 3.806321452589106 - type: f1 value: 1.9196646149428953 - type: f1_weighted value: 1.6645242984042585 - type: main_score value: 3.806321452589106 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 35.77673167451245 - type: f1 value: 33.18041618186975 - type: f1_weighted value: 35.833046113268786 - type: main_score value: 35.77673167451245 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 53.4969737726967 - type: f1 value: 51.88341293441036 - type: f1_weighted value: 53.20514357568628 - type: main_score value: 53.4969737726967 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 4.784801613987895 - type: f1 value: 1.969274839533907 - type: f1_weighted value: 2.4942212470758016 - type: main_score value: 4.784801613987895 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 31.069266980497645 - type: f1 value: 31.48265427665997 - type: f1_weighted value: 30.3696521492686 - type: main_score value: 31.069266980497645 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 1.9670477471418968 - type: f1 value: 0.45697365831527426 - type: f1_weighted value: 0.2853963696007572 - type: main_score value: 1.9670477471418968 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 2.1015467383994615 - type: f1 value: 0.5210481229705188 - type: f1_weighted value: 0.5924944385210995 - type: main_score value: 2.1015467383994615 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 31.318090114324143 - type: f1 value: 30.05810538658039 - type: f1_weighted value: 30.360376696442504 - type: main_score value: 31.318090114324143 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 19.078681909885677 - type: f1 value: 18.360818504390085 - type: f1_weighted value: 18.15470646878023 - type: main_score value: 19.078681909885677 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 35.564895763281775 - type: f1 value: 35.587064959631185 - type: f1_weighted value: 34.4349962874478 - type: 
main_score value: 35.564895763281775 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 3.2111634162743776 - type: f1 value: 1.4524341197394974 - type: f1_weighted value: 1.3395307357797508 - type: main_score value: 3.2111634162743776 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 33.99798251513114 - type: f1 value: 32.69281167233965 - type: f1_weighted value: 32.22827641327085 - type: main_score value: 33.99798251513114 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 29.660390047074646 - type: f1 value: 28.090771859451536 - type: f1_weighted value: 29.50058846849659 - type: main_score value: 29.660390047074646 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 2.118359112306658 - type: f1 value: 1.0794128790274702 - type: f1_weighted value: 1.0149237288074577 - type: main_score value: 2.118359112306658 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 2.242770679219906 - type: f1 value: 0.6772746623940161 - type: f1_weighted value: 0.5935033259869644 - type: main_score value: 2.242770679219906 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 4.7679892400807 - type: f1 value: 0.6958635242707644 - type: f1_weighted value: 0.7383116540131966 - type: main_score value: 4.7679892400807 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 4.599865501008742 - type: f1 value: 0.8680195452904774 - type: f1_weighted value: 1.3022709162006496 - type: main_score value: 4.599865501008742 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 45.80026899798251 - type: f1 value: 42.09162084904855 - type: f1_weighted value: 45.937899984554896 - type: main_score value: 45.80026899798251 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 7.935440484196368 - type: f1 value: 2.054473625082069 - type: f1_weighted value: 2.331310360179839 - type: main_score value: 7.935440484196368 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - 
type: accuracy value: 39.525891055817084 - type: f1 value: 35.64315129468117 - type: f1_weighted value: 38.873288696604064 - type: main_score value: 39.525891055817084 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 16.822461331540016 - type: f1 value: 9.528868617590787 - type: f1_weighted value: 12.052833175443745 - type: main_score value: 16.822461331540016 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 41.44922663080027 - type: f1 value: 38.29694592816531 - type: f1_weighted value: 40.494682049238065 - type: main_score value: 41.44922663080027 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 36.37525218560861 - type: f1 value: 32.742079476295714 - type: f1_weighted value: 36.41453434396975 - type: main_score value: 36.37525218560861 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 43.79959650302623 - type: f1 value: 41.74604131799107 - type: f1_weighted value: 41.89697637112924 - type: main_score value: 43.79959650302623 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 6.2844653665097505 - type: f1 value: 1.1363404526147562 - type: f1_weighted value: 1.507290141564863 - type: main_score value: 6.2844653665097505 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 5.406859448554135 - type: f1 value: 2.560817113707556 - type: f1_weighted value: 2.408341973383642 - type: main_score value: 5.406859448554135 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 43.08002689979825 - type: f1 value: 39.31491179400749 - type: f1_weighted value: 42.387701010649735 - type: main_score value: 43.08002689979825 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 46.30127774041695 - type: f1 value: 43.177548916667774 - type: f1_weighted value: 46.02641155529322 - type: main_score value: 46.30127774041695 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 5.968392737054471 - type: f1 value: 1.558644350101979 - type: f1_weighted value: 2.184277748991485 - type: main_score value: 5.968392737054471 - task: type: Classification dataset: name: MTEB 
MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 39.08204438466712 - type: f1 value: 37.19465931596499 - type: f1_weighted value: 37.92508333682256 - type: main_score value: 39.08204438466712 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 5.712844653665098 - type: f1 value: 2.3513952725160445 - type: f1_weighted value: 2.591355133449796 - type: main_score value: 5.712844653665098 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 44.79488903833221 - type: f1 value: 42.216456011086514 - type: f1_weighted value: 43.63836497077992 - type: main_score value: 44.79488903833221 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 38.91055817081372 - type: f1 value: 36.658118919837705 - type: f1_weighted value: 38.285047658406185 - type: main_score value: 38.91055817081372 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 42.82447881640888 - type: f1 value: 39.71183576580626 - type: f1_weighted value: 42.99955794883917 - type: main_score value: 42.82447881640888 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 6.9569603227975785 - type: f1 value: 1.3249507928345723 - type: f1_weighted value: 2.1526435195273512 - type: main_score value: 6.9569603227975785 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 35.47747141896436 - type: f1 value: 32.68368628376791 - type: f1_weighted value: 34.486227854192805 - type: main_score value: 35.47747141896436 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 44.20645595158036 - type: f1 value: 40.46275245484104 - type: f1_weighted value: 43.07451372640555 - type: main_score value: 44.20645595158036 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 37.565568258238066 - type: f1 value: 34.34228491467635 - type: f1_weighted value: 36.715470304700304 - type: main_score value: 37.565568258238066 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 4.428379287155346 - type: f1 value: 
2.118733356397359 - type: f1_weighted value: 1.6597464958411214 - type: main_score value: 4.428379287155346 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 34.67720242098184 - type: f1 value: 31.648714845929625 - type: f1_weighted value: 34.62782835061803 - type: main_score value: 34.67720242098184 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 8.006052454606591 - type: f1 value: 2.1079480174137237 - type: f1_weighted value: 2.1631918405037758 - type: main_score value: 8.006052454606591 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 39.22999327505043 - type: f1 value: 37.16721131021293 - type: f1_weighted value: 39.397613949853735 - type: main_score value: 39.22999327505043 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 41.55010087424344 - type: f1 value: 38.32223910141539 - type: f1_weighted value: 41.72498846160742 - type: main_score value: 41.55010087424344 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 3.0363147276395432 - type: f1 value: 0.4951111891349476 - type: f1_weighted value: 0.4456347917226148 - type: main_score value: 3.0363147276395432 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 42.84801613987895 - type: f1 value: 40.77209890733345 - type: f1_weighted value: 42.29511181907119 - type: main_score value: 42.84801613987895 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 8.140551445864155 - type: f1 value: 3.088889182397252 - type: f1_weighted value: 3.382529160821981 - type: main_score value: 8.140551445864155 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 10.063887020847343 - type: f1 value: 4.3953906298120415 - type: f1_weighted value: 6.1030360630370675 - type: main_score value: 10.063887020847343 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 40.86079354404843 - type: f1 value: 38.12848430733589 - type: f1_weighted value: 39.61399818207077 - type: main_score value: 40.86079354404843 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: 
mteb/amazon_massive_scenario config: zh-CN split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 3.1809011432414254 - type: f1 value: 0.6663078501713696 - type: f1_weighted value: 0.6161504543566888 - type: main_score value: 3.1809011432414254 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 38.991257565568255 - type: f1 value: 35.8711142606479 - type: f1_weighted value: 39.27058914996822 - type: main_score value: 38.991257565568255 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 7.5117686617350365 - type: f1 value: 2.730333236177 - type: f1_weighted value: 2.476626926704587 - type: main_score value: 7.5117686617350365 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 8.32548755884331 - type: f1 value: 3.0996007067176996 - type: f1_weighted value: 3.0676442629069967 - type: main_score value: 8.32548755884331 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 47.57901815736382 - type: f1 value: 43.47365742357309 - type: f1_weighted value: 47.581511497169764 - type: main_score value: 47.57901815736382 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 63.84330867518494 - type: f1 value: 61.80623184800081 - type: f1_weighted value: 63.66823920852459 - type: main_score value: 63.84330867518494 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 10.060524546065905 - type: f1 value: 4.697788726183898 - type: f1_weighted value: 8.0688374518688 - type: main_score value: 10.060524546065905 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 39.02824478816409 - type: f1 value: 37.25613303442762 - type: f1_weighted value: 38.22861284484312 - type: main_score value: 39.02824478816409 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 5.0638870208473445 - type: f1 value: 1.0753261358276471 - type: f1_weighted value: 1.0802883978030118 - type: main_score value: 5.0638870208473445 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 6.321452589105584 - type: f1 value: 1.5829376262790664 - type: f1_weighted value: 
2.232184358298365 - type: main_score value: 6.321452589105584 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 37.21923335574983 - type: f1 value: 36.993268170979576 - type: f1_weighted value: 35.67645464322424 - type: main_score value: 37.21923335574983 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 25.934767989240076 - type: f1 value: 24.616943306685748 - type: f1_weighted value: 24.74309285569417 - type: main_score value: 25.934767989240076 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 44.69401479488904 - type: f1 value: 42.41464498194295 - type: f1_weighted value: 44.26134318268762 - type: main_score value: 44.69401479488904 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 8.47343644922663 - type: f1 value: 2.9718553546241506 - type: f1_weighted value: 3.9449930229420818 - type: main_score value: 8.47343644922663 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 42.92199058507061 - type: f1 value: 40.00185738475351 - type: f1_weighted value: 42.53838435113089 - type: main_score value: 42.92199058507061 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 36.856086079354405 - type: f1 value: 35.85809216604705 - type: f1_weighted value: 36.503220372495356 - type: main_score value: 36.856086079354405 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 7.427706792199058 - type: f1 value: 2.355649221281433 - type: f1_weighted value: 2.3635737714890097 - type: main_score value: 7.427706792199058 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 7.2494956287827845 - type: f1 value: 3.0267066892790786 - type: f1_weighted value: 2.228737132597149 - type: main_score value: 7.2494956287827845 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S (default) type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: main_score value: 22.3149940028344 - type: v_measure value: 22.3149940028344 - type: v_measure_std value: 1.184495521159966 - task: type: Reranking dataset: name: MTEB MindSmallReranking (default) type: mteb/mind_small config: default split: test revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 metrics: - type: 
main_score value: 26.874241404290856 - type: map value: 26.874241404290856 - type: mrr value: 27.50127374810197 - type: nAUC_map_diff1 value: 20.72193125860396 - type: nAUC_map_max value: -21.181361650744908 - type: nAUC_map_std value: -21.136143423992458 - type: nAUC_mrr_diff1 value: 18.217458666186445 - type: nAUC_mrr_max value: -14.657975701378914 - type: nAUC_mrr_std value: -17.948245474413323 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (de) type: GEM/opusparcus config: de split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cosine_accuracy value: 99.90448901623687 - type: cosine_accuracy_threshold value: 32.084010045061795 - type: cosine_ap value: 100.0 - type: cosine_f1 value: 99.95222169135212 - type: cosine_f1_threshold value: 32.084010045061795 - type: cosine_precision value: 100.0 - type: cosine_recall value: 99.90448901623687 - type: dot_accuracy value: 99.90448901623687 - type: dot_accuracy_threshold value: 14.194202811836867 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.95222169135212 - type: dot_f1_threshold value: 14.194202811836867 - type: dot_precision value: 100.0 - type: dot_recall value: 99.90448901623687 - type: euclidean_accuracy value: 99.90448901623687 - type: euclidean_accuracy_threshold value: 116.50380181599331 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.95222169135212 - type: euclidean_f1_threshold value: 116.50380181599331 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.90448901623687 - type: main_score value: 100.0 - type: manhattan_accuracy value: 99.90448901623687 - type: manhattan_accuracy_threshold value: 5994.10849076798 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.95222169135212 - type: manhattan_f1_threshold value: 5994.10849076798 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.90448901623687 - type: max_accuracy value: 99.90448901623687 - type: max_ap value: 100.0 - type: max_f1 value: 99.95222169135212 - type: max_precision value: 100.0 - type: max_recall value: 99.90448901623687 - type: similarity_accuracy value: 99.90448901623687 - type: similarity_accuracy_threshold value: 32.084010045061795 - type: similarity_ap value: 100.0 - type: similarity_f1 value: 99.95222169135212 - type: similarity_f1_threshold value: 32.084010045061795 - type: similarity_precision value: 100.0 - type: similarity_recall value: 99.90448901623687 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (en) type: GEM/opusparcus config: en split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cosine_accuracy value: 99.89816700610999 - type: cosine_accuracy_threshold value: 40.08682069986206 - type: cosine_ap value: 100.0 - type: cosine_f1 value: 99.9490575649516 - type: cosine_f1_threshold value: 40.08682069986206 - type: cosine_precision value: 100.0 - type: cosine_recall value: 99.89816700610999 - type: dot_accuracy value: 99.89816700610999 - type: dot_accuracy_threshold value: 40.08682068226012 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.9490575649516 - type: dot_f1_threshold value: 40.08682068226012 - type: dot_precision value: 100.0 - type: dot_recall value: 99.89816700610999 - type: euclidean_accuracy value: 99.89816700610999 - type: euclidean_accuracy_threshold value: 109.46519126990579 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.9490575649516 - type: euclidean_f1_threshold value: 109.46519126990579 - type: euclidean_precision value: 100.0 - type: 
euclidean_recall value: 99.89816700610999 - type: main_score value: 100.0 - type: manhattan_accuracy value: 99.89816700610999 - type: manhattan_accuracy_threshold value: 5586.837509625999 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.9490575649516 - type: manhattan_f1_threshold value: 5586.837509625999 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.89816700610999 - type: max_accuracy value: 99.89816700610999 - type: max_ap value: 100.0 - type: max_f1 value: 99.9490575649516 - type: max_precision value: 100.0 - type: max_recall value: 99.89816700610999 - type: similarity_accuracy value: 99.89816700610999 - type: similarity_accuracy_threshold value: 40.08682069986206 - type: similarity_ap value: 100.0 - type: similarity_f1 value: 99.9490575649516 - type: similarity_f1_threshold value: 40.08682069986206 - type: similarity_precision value: 100.0 - type: similarity_recall value: 99.89816700610999 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (fi) type: GEM/opusparcus config: fi split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cosine_accuracy value: 99.89561586638831 - type: cosine_accuracy_threshold value: -22.557142663724193 - type: cosine_ap value: 99.99999999999999 - type: cosine_f1 value: 99.94778067885117 - type: cosine_f1_threshold value: -22.557142663724193 - type: cosine_precision value: 100.0 - type: cosine_recall value: 99.89561586638831 - type: dot_accuracy value: 99.89561586638831 - type: dot_accuracy_threshold value: -22.55714265463469 - type: dot_ap value: 99.99999999999999 - type: dot_f1 value: 99.94778067885117 - type: dot_f1_threshold value: -22.55714265463469 - type: dot_precision value: 100.0 - type: dot_recall value: 99.89561586638831 - type: euclidean_accuracy value: 99.89561586638831 - type: euclidean_accuracy_threshold value: 156.13722151560276 - type: euclidean_ap value: 99.99999999999999 - type: euclidean_f1 value: 99.94778067885117 - type: euclidean_f1_threshold value: 156.13722151560276 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.89561586638831 - type: main_score value: 99.99999999999999 - type: manhattan_accuracy value: 99.89561586638831 - type: manhattan_accuracy_threshold value: 8123.721240822417 - type: manhattan_ap value: 99.99999999999999 - type: manhattan_f1 value: 99.94778067885117 - type: manhattan_f1_threshold value: 8123.721240822417 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.89561586638831 - type: max_accuracy value: 99.89561586638831 - type: max_ap value: 99.99999999999999 - type: max_f1 value: 99.94778067885117 - type: max_precision value: 100.0 - type: max_recall value: 99.89561586638831 - type: similarity_accuracy value: 99.89561586638831 - type: similarity_accuracy_threshold value: -22.557142663724193 - type: similarity_ap value: 99.99999999999999 - type: similarity_f1 value: 99.94778067885117 - type: similarity_f1_threshold value: -22.557142663724193 - type: similarity_precision value: 100.0 - type: similarity_recall value: 99.89561586638831 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (fr) type: GEM/opusparcus config: fr split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cosine_accuracy value: 99.90069513406156 - type: cosine_accuracy_threshold value: 4.276752354307001 - type: cosine_ap value: 100.0 - type: cosine_f1 value: 99.95032290114257 - type: cosine_f1_threshold value: 4.276752354307001 - type: cosine_precision value: 100.0 - type: 
cosine_recall value: 99.90069513406156 - type: dot_accuracy value: 99.90069513406156 - type: dot_accuracy_threshold value: 4.276752351391649 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.95032290114257 - type: dot_f1_threshold value: 4.276752351391649 - type: dot_precision value: 100.0 - type: dot_recall value: 99.90069513406156 - type: euclidean_accuracy value: 99.90069513406156 - type: euclidean_accuracy_threshold value: 136.9020176878726 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.95032290114257 - type: euclidean_f1_threshold value: 136.9020176878726 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.90069513406156 - type: main_score value: 100.0 - type: manhattan_accuracy value: 99.90069513406156 - type: manhattan_accuracy_threshold value: 7063.200709566871 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.95032290114257 - type: manhattan_f1_threshold value: 7063.200709566871 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.90069513406156 - type: max_accuracy value: 99.90069513406156 - type: max_ap value: 100.0 - type: max_f1 value: 99.95032290114257 - type: max_precision value: 100.0 - type: max_recall value: 99.90069513406156 - type: similarity_accuracy value: 99.90069513406156 - type: similarity_accuracy_threshold value: 4.276752354307001 - type: similarity_ap value: 100.0 - type: similarity_f1 value: 99.95032290114257 - type: similarity_f1_threshold value: 4.276752354307001 - type: similarity_precision value: 100.0 - type: similarity_recall value: 99.90069513406156 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (ru) type: GEM/opusparcus config: ru split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cosine_accuracy value: 99.90636704119851 - type: cosine_accuracy_threshold value: 7.132103928293631 - type: cosine_ap value: 100.0 - type: cosine_f1 value: 99.95316159250585 - type: cosine_f1_threshold value: 7.132103928293631 - type: cosine_precision value: 100.0 - type: cosine_recall value: 99.90636704119851 - type: dot_accuracy value: 99.90636704119851 - type: dot_accuracy_threshold value: -13.447421954803113 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.95316159250585 - type: dot_f1_threshold value: -13.447421954803113 - type: dot_precision value: 100.0 - type: dot_recall value: 99.90636704119851 - type: euclidean_accuracy value: 99.90636704119851 - type: euclidean_accuracy_threshold value: 133.89453353967028 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.95316159250585 - type: euclidean_f1_threshold value: 133.89453353967028 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.90636704119851 - type: main_score value: 100.0 - type: manhattan_accuracy value: 99.90636704119851 - type: manhattan_accuracy_threshold value: 7020.097656622158 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.95316159250585 - type: manhattan_f1_threshold value: 7020.097656622158 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.90636704119851 - type: max_accuracy value: 99.90636704119851 - type: max_ap value: 100.0 - type: max_f1 value: 99.95316159250585 - type: max_precision value: 100.0 - type: max_recall value: 99.90636704119851 - type: similarity_accuracy value: 99.90636704119851 - type: similarity_accuracy_threshold value: 7.132103928293631 - type: similarity_ap value: 100.0 - type: similarity_f1 value: 99.95316159250585 - type: similarity_f1_threshold value: 7.132103928293631 
- type: similarity_precision value: 100.0 - type: similarity_recall value: 99.90636704119851 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (sv) type: GEM/opusparcus config: sv split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cosine_accuracy value: 99.89440337909187 - type: cosine_accuracy_threshold value: 0.2529676444121498 - type: cosine_ap value: 100.0 - type: cosine_f1 value: 99.9471737982039 - type: cosine_f1_threshold value: 0.2529676444121498 - type: cosine_precision value: 100.0 - type: cosine_recall value: 99.89440337909187 - type: dot_accuracy value: 99.89440337909187 - type: dot_accuracy_threshold value: -13.939213532311562 - type: dot_ap value: 99.99999999999999 - type: dot_f1 value: 99.9471737982039 - type: dot_f1_threshold value: -13.939213532311562 - type: dot_precision value: 100.0 - type: dot_recall value: 99.89440337909187 - type: euclidean_accuracy value: 99.89440337909187 - type: euclidean_accuracy_threshold value: 139.80163412046423 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.9471737982039 - type: euclidean_f1_threshold value: 139.80163412046423 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.89440337909187 - type: main_score value: 100.0 - type: manhattan_accuracy value: 99.89440337909187 - type: manhattan_accuracy_threshold value: 7259.639697084279 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.9471737982039 - type: manhattan_f1_threshold value: 7259.639697084279 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.89440337909187 - type: max_accuracy value: 99.89440337909187 - type: max_ap value: 100.0 - type: max_f1 value: 99.9471737982039 - type: max_precision value: 100.0 - type: max_recall value: 99.89440337909187 - type: similarity_accuracy value: 99.89440337909187 - type: similarity_accuracy_threshold value: 0.2529676444121498 - type: similarity_ap value: 100.0 - type: similarity_f1 value: 99.9471737982039 - type: similarity_f1_threshold value: 0.2529676444121498 - type: similarity_precision value: 100.0 - type: similarity_recall value: 99.89440337909187 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval (default) type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: main_score value: 68.73 - type: map_at_1 value: 53.492 - type: map_at_10 value: 64.086 - type: map_at_100 value: 64.832 - type: map_at_1000 value: 64.88199999999999 - type: map_at_20 value: 64.537 - type: map_at_3 value: 61.592 - type: map_at_5 value: 63.113 - type: mrr_at_1 value: 61.56 - type: mrr_at_10 value: 68.92823412698384 - type: mrr_at_100 value: 69.28307943909826 - type: mrr_at_1000 value: 69.30426854775237 - type: mrr_at_20 value: 69.15371761666225 - type: mrr_at_3 value: 67.3866666666664 - type: mrr_at_5 value: 68.36666666666618 - type: nauc_map_at_1000_diff1 value: 67.15642759814821 - type: nauc_map_at_1000_max value: 45.055780376792974 - type: nauc_map_at_1000_std value: -9.604334727421541 - type: nauc_map_at_100_diff1 value: 67.15173583169253 - type: nauc_map_at_100_max value: 45.04159938681548 - type: nauc_map_at_100_std value: -9.621105481487115 - type: nauc_map_at_10_diff1 value: 67.21904799567723 - type: nauc_map_at_10_max value: 44.64598524589752 - type: nauc_map_at_10_std value: -10.240236577363671 - type: nauc_map_at_1_diff1 value: 69.75325378909568 - type: nauc_map_at_1_max value: 39.57437605382559 - type: nauc_map_at_1_std value: -13.560013524667186 - type: 
nauc_map_at_20_diff1 value: 67.18218534766027 - type: nauc_map_at_20_max value: 44.898145457359036 - type: nauc_map_at_20_std value: -9.853291926035132 - type: nauc_map_at_3_diff1 value: 67.33579825697572 - type: nauc_map_at_3_max value: 43.434634746776254 - type: nauc_map_at_3_std value: -11.533963319404025 - type: nauc_map_at_5_diff1 value: 67.29212861119778 - type: nauc_map_at_5_max value: 44.149577446190584 - type: nauc_map_at_5_std value: -10.846590188540638 - type: nauc_mrr_at_1000_diff1 value: 68.43853101345768 - type: nauc_mrr_at_1000_max value: 48.23642231569019 - type: nauc_mrr_at_1000_std value: -8.164139622888774 - type: nauc_mrr_at_100_diff1 value: 68.43230932580869 - type: nauc_mrr_at_100_max value: 48.2366506280321 - type: nauc_mrr_at_100_std value: -8.15719155689163 - type: nauc_mrr_at_10_diff1 value: 68.40804119736147 - type: nauc_mrr_at_10_max value: 48.2668711810203 - type: nauc_mrr_at_10_std value: -8.28336977621905 - type: nauc_mrr_at_1_diff1 value: 70.8152113865952 - type: nauc_mrr_at_1_max value: 47.0802377233158 - type: nauc_mrr_at_1_std value: -11.195273246909617 - type: nauc_mrr_at_20_diff1 value: 68.42041452964153 - type: nauc_mrr_at_20_max value: 48.22983590171867 - type: nauc_mrr_at_20_std value: -8.20351261044932 - type: nauc_mrr_at_3_diff1 value: 68.44729044448252 - type: nauc_mrr_at_3_max value: 48.16311095038692 - type: nauc_mrr_at_3_std value: -8.78728757717942 - type: nauc_mrr_at_5_diff1 value: 68.38338463498374 - type: nauc_mrr_at_5_max value: 48.268101599089846 - type: nauc_mrr_at_5_std value: -8.477703392514476 - type: nauc_ndcg_at_1000_diff1 value: 66.78555692495787 - type: nauc_ndcg_at_1000_max value: 46.769939711081044 - type: nauc_ndcg_at_1000_std value: -6.218846919120327 - type: nauc_ndcg_at_100_diff1 value: 66.59364370802282 - type: nauc_ndcg_at_100_max value: 46.67887263322755 - type: nauc_ndcg_at_100_std value: -6.293812979200834 - type: nauc_ndcg_at_10_diff1 value: 66.52295231581002 - type: nauc_ndcg_at_10_max value: 46.11104447757736 - type: nauc_ndcg_at_10_std value: -8.188391638090097 - type: nauc_ndcg_at_1_diff1 value: 70.71581893884627 - type: nauc_ndcg_at_1_max value: 47.23054126591041 - type: nauc_ndcg_at_1_std value: -11.16636548054171 - type: nauc_ndcg_at_20_diff1 value: 66.55690608251255 - type: nauc_ndcg_at_20_max value: 46.32176620407243 - type: nauc_ndcg_at_20_std value: -7.290514968713207 - type: nauc_ndcg_at_3_diff1 value: 66.56467011058169 - type: nauc_ndcg_at_3_max value: 45.85553207058 - type: nauc_ndcg_at_3_std value: -9.625769901172513 - type: nauc_ndcg_at_5_diff1 value: 66.54844587662231 - type: nauc_ndcg_at_5_max value: 45.907121007430526 - type: nauc_ndcg_at_5_std value: -9.10244355196338 - type: nauc_precision_at_1000_diff1 value: -22.422463003175896 - type: nauc_precision_at_1000_max value: 4.7758645718637895 - type: nauc_precision_at_1000_std value: 17.79812492946632 - type: nauc_precision_at_100_diff1 value: -13.917229261278852 - type: nauc_precision_at_100_max value: 12.29030615723118 - type: nauc_precision_at_100_std value: 17.911028283874135 - type: nauc_precision_at_10_diff1 value: 6.590674643516733 - type: nauc_precision_at_10_max value: 24.19926960425754 - type: nauc_precision_at_10_std value: 10.06424163424373 - type: nauc_precision_at_1_diff1 value: 70.71581893884627 - type: nauc_precision_at_1_max value: 47.23054126591041 - type: nauc_precision_at_1_std value: -11.16636548054171 - type: nauc_precision_at_20_diff1 value: -2.483678970625915 - type: nauc_precision_at_20_max value: 19.72734209605925 - type: 
nauc_precision_at_20_std value: 14.191677013682849 - type: nauc_precision_at_3_diff1 value: 29.73727057888939 - type: nauc_precision_at_3_max value: 34.568730451871346 - type: nauc_precision_at_3_std value: 1.4403998107739213 - type: nauc_precision_at_5_diff1 value: 18.2542788731059 - type: nauc_precision_at_5_max value: 29.292888170520108 - type: nauc_precision_at_5_std value: 5.510094141692317 - type: nauc_recall_at_1000_diff1 value: 57.196928991569266 - type: nauc_recall_at_1000_max value: 46.153589753933446 - type: nauc_recall_at_1000_std value: 30.748423976943613 - type: nauc_recall_at_100_diff1 value: 57.976992158794886 - type: nauc_recall_at_100_max value: 45.79893337773414 - type: nauc_recall_at_100_std value: 13.253969225652396 - type: nauc_recall_at_10_diff1 value: 60.22299195797645 - type: nauc_recall_at_10_max value: 43.85065064759132 - type: nauc_recall_at_10_std value: -3.125491914491259 - type: nauc_recall_at_1_diff1 value: 69.75325378909568 - type: nauc_recall_at_1_max value: 39.57437605382559 - type: nauc_recall_at_1_std value: -13.560013524667186 - type: nauc_recall_at_20_diff1 value: 59.1680127262332 - type: nauc_recall_at_20_max value: 44.06962727874914 - type: nauc_recall_at_20_std value: 1.7610688570268762 - type: nauc_recall_at_3_diff1 value: 62.75286406178069 - type: nauc_recall_at_3_max value: 42.40300188251299 - type: nauc_recall_at_3_std value: -8.94270893049646 - type: nauc_recall_at_5_diff1 value: 61.57224817120582 - type: nauc_recall_at_5_max value: 43.2469875881082 - type: nauc_recall_at_5_std value: -6.712607605292967 - type: ndcg_at_1 value: 61.61 - type: ndcg_at_10 value: 68.73 - type: ndcg_at_100 value: 71.281 - type: ndcg_at_1000 value: 72.209 - type: ndcg_at_20 value: 69.862 - type: ndcg_at_3 value: 65.35 - type: ndcg_at_5 value: 67.099 - type: precision_at_1 value: 61.61 - type: precision_at_10 value: 10.295 - type: precision_at_100 value: 1.2670000000000001 - type: precision_at_1000 value: 0.14100000000000001 - type: precision_at_20 value: 5.583 - type: precision_at_3 value: 28.157 - type: precision_at_5 value: 18.644 - type: recall_at_1 value: 53.492 - type: recall_at_10 value: 77.395 - type: recall_at_100 value: 87.822 - type: recall_at_1000 value: 94.039 - type: recall_at_20 value: 81.381 - type: recall_at_3 value: 67.657 - type: recall_at_5 value: 72.494 - task: type: Clustering dataset: name: MTEB RedditClustering (default) type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: main_score value: 22.18693423438157 - type: v_measure value: 22.18693423438157 - type: v_measure_std value: 3.362608784471836 - task: type: STS dataset: name: MTEB SICK-R (default) type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cosine_pearson value: 74.25579384618342 - type: cosine_spearman value: 67.31903429944056 - type: euclidean_pearson value: 71.84781550612432 - type: euclidean_spearman value: 67.31913348808827 - type: main_score value: 67.31903429944056 - type: manhattan_pearson value: 71.93525335001107 - type: manhattan_spearman value: 67.44731252485444 - type: pearson value: 74.25579384618342 - type: spearman value: 67.31903429944056 - task: type: STS dataset: name: MTEB STS12 (default) type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cosine_pearson value: 70.45282392047417 - type: cosine_spearman value: 57.66176503826067 - type: euclidean_pearson value: 
68.20476513300197 - type: euclidean_spearman value: 57.662984752186595 - type: main_score value: 57.66176503826067 - type: manhattan_pearson value: 68.35595302570229 - type: manhattan_spearman value: 57.78214901099006 - type: pearson value: 70.45282392047417 - type: spearman value: 57.66176503826067 - task: type: STS dataset: name: MTEB STS13 (default) type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cosine_pearson value: 66.72224934737348 - type: cosine_spearman value: 71.89696855506867 - type: euclidean_pearson value: 70.4712630269631 - type: euclidean_spearman value: 71.89698079206684 - type: main_score value: 71.89696855506867 - type: manhattan_pearson value: 70.45860743861545 - type: manhattan_spearman value: 71.91608445555363 - type: pearson value: 66.72224934737348 - type: spearman value: 71.89696855506867 - task: type: STS dataset: name: MTEB STS14 (default) type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cosine_pearson value: 70.34249555730298 - type: cosine_spearman value: 69.53679034910807 - type: euclidean_pearson value: 71.56701694057745 - type: euclidean_spearman value: 69.5367806640627 - type: main_score value: 69.53679034910807 - type: manhattan_pearson value: 71.53194206589868 - type: manhattan_spearman value: 69.52240262783113 - type: pearson value: 70.34249555730298 - type: spearman value: 69.53679034910807 - task: type: STS dataset: name: MTEB STS15 (default) type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cosine_pearson value: 68.33547250158846 - type: cosine_spearman value: 73.96543736110634 - type: euclidean_pearson value: 72.63926797717605 - type: euclidean_spearman value: 73.96543799049243 - type: main_score value: 73.96543736110634 - type: manhattan_pearson value: 72.6308651035737 - type: manhattan_spearman value: 73.99784893840472 - type: pearson value: 68.33547250158846 - type: spearman value: 73.96543736110634 - task: type: STS dataset: name: MTEB STS16 (default) type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cosine_pearson value: 62.50064232309498 - type: cosine_spearman value: 69.99690285087063 - type: euclidean_pearson value: 67.7773080753282 - type: euclidean_spearman value: 69.99717504340504 - type: main_score value: 69.99690285087063 - type: manhattan_pearson value: 67.77737269625732 - type: manhattan_spearman value: 70.05662507231811 - type: pearson value: 62.50064232309498 - type: spearman value: 69.99690285087063 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: -4.639974351143124 - type: cosine_spearman value: -5.70963417137641 - type: euclidean_pearson value: -4.671269689471623 - type: euclidean_spearman value: -5.70963417137641 - type: main_score value: -5.70963417137641 - type: manhattan_pearson value: -4.822356012695697 - type: manhattan_spearman value: -5.805771748799997 - type: pearson value: -4.639974351143124 - type: spearman value: -5.70963417137641 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 75.07706637430398 - type: cosine_spearman value: 78.81834383119009 - type: 
euclidean_pearson value: 78.33040815719426 - type: euclidean_spearman value: 78.81922098296683 - type: main_score value: 78.81834383119009 - type: manhattan_pearson value: 78.25386282376627 - type: manhattan_spearman value: 78.73096351789457 - type: pearson value: 75.07706637430398 - type: spearman value: 78.81834383119009 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: -8.034513096828757 - type: cosine_spearman value: -8.94071782108332 - type: euclidean_pearson value: -8.362035046748408 - type: euclidean_spearman value: -8.94071782108332 - type: main_score value: -8.94071782108332 - type: manhattan_pearson value: -8.58384659065939 - type: manhattan_spearman value: -9.022478967496742 - type: pearson value: -8.034513096828757 - type: spearman value: -8.94071782108332 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: -9.309746585888194 - type: cosine_spearman value: -9.989532291941243 - type: euclidean_pearson value: -9.113663493693515 - type: euclidean_spearman value: -9.989532291941243 - type: main_score value: -9.989532291941243 - type: manhattan_pearson value: -9.123108445100232 - type: manhattan_spearman value: -10.02555353386953 - type: pearson value: -9.309746585888194 - type: spearman value: -9.989532291941243 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 49.203212653579534 - type: cosine_spearman value: 62.17745071362616 - type: euclidean_pearson value: 60.12172084869311 - type: euclidean_spearman value: 62.17745071362616 - type: main_score value: 62.17745071362616 - type: manhattan_pearson value: 60.03123674358504 - type: manhattan_spearman value: 62.08054980165127 - type: pearson value: 49.203212653579534 - type: spearman value: 62.17745071362616 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: -3.796131822561097 - type: cosine_spearman value: -3.6829417954942962 - type: euclidean_pearson value: -3.9617579449787215 - type: euclidean_spearman value: -3.6829417954942962 - type: main_score value: -3.6829417954942962 - type: manhattan_pearson value: -4.229917664747983 - type: manhattan_spearman value: -3.8304347521413575 - type: pearson value: -3.796131822561097 - type: spearman value: -3.6829417954942962 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 9.70401307418669 - type: cosine_spearman value: 7.125994342518046 - type: euclidean_pearson value: 8.692865519584803 - type: euclidean_spearman value: 7.086314063560257 - type: main_score value: 7.125994342518046 - type: manhattan_pearson value: 8.688214277742162 - type: manhattan_spearman value: 6.951151829297476 - type: pearson value: 9.70401307418669 - type: spearman value: 7.125994342518046 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: 
cosine_pearson value: -12.59835322441286 - type: cosine_spearman value: -17.99707926594973 - type: euclidean_pearson value: -14.34931127125891 - type: euclidean_spearman value: -17.99707926594973 - type: main_score value: -17.99707926594973 - type: manhattan_pearson value: -14.599702365227513 - type: manhattan_spearman value: -18.256327942493844 - type: pearson value: -12.59835322441286 - type: spearman value: -17.99707926594973 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: -0.06664551245524106 - type: cosine_spearman value: -0.891108084699552 - type: euclidean_pearson value: 0.2657845183657392 - type: euclidean_spearman value: -0.891108084699552 - type: main_score value: -0.891108084699552 - type: manhattan_pearson value: 0.120752189864216 - type: manhattan_spearman value: -0.8531297054534491 - type: pearson value: -0.06664551245524106 - type: spearman value: -0.891108084699552 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 9.587866133715462 - type: cosine_spearman value: 10.240476793789082 - type: euclidean_pearson value: 9.587866133709937 - type: euclidean_spearman value: 10.299853867377841 - type: main_score value: 10.240476793789082 - type: manhattan_pearson value: 9.587479080379996 - type: manhattan_spearman value: 10.289638886132417 - type: pearson value: 9.587866133715462 - type: spearman value: 10.240476793789082 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: -11.455833153778357 - type: cosine_spearman value: -12.120168687487281 - type: euclidean_pearson value: -4.8404233986021 - type: euclidean_spearman value: -5.629445269503656 - type: main_score value: -12.120168687487281 - type: manhattan_pearson value: -5.802510530492165 - type: manhattan_spearman value: -4.129636012427943 - type: pearson value: -11.455833153778357 - type: spearman value: -12.120168687487281 - task: type: STS dataset: name: MTEB STSBenchmark (default) type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cosine_pearson value: 67.09018720017058 - type: cosine_spearman value: 67.6086401236391 - type: euclidean_pearson value: 69.37492911426406 - type: euclidean_spearman value: 67.60865860108962 - type: main_score value: 67.6086401236391 - type: manhattan_pearson value: 69.34659483682688 - type: manhattan_spearman value: 67.592012200863 - type: pearson value: 67.09018720017058 - type: spearman value: 67.6086401236391 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (it) type: mteb/stsb_multi_mt config: it split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 44.27233827248044 - type: cosine_spearman value: 49.47510261384346 - type: euclidean_pearson value: 49.40398312290145 - type: euclidean_spearman value: 49.47500131889738 - type: main_score value: 49.47510261384346 - type: manhattan_pearson value: 49.341548618895466 - type: manhattan_spearman value: 49.4424887001277 - type: pearson value: 44.27233827248044 - type: spearman value: 49.47510261384346 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (nl) 
type: mteb/stsb_multi_mt config: nl split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 44.79696340221503 - type: cosine_spearman value: 48.84897104878986 - type: euclidean_pearson value: 49.324260285317855 - type: euclidean_spearman value: 48.848924358139364 - type: main_score value: 48.84897104878986 - type: manhattan_pearson value: 49.33647165074528 - type: manhattan_spearman value: 48.88344266774654 - type: pearson value: 44.79696340221503 - type: spearman value: 48.84897104878986 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (en) type: mteb/stsb_multi_mt config: en split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 67.09018713920469 - type: cosine_spearman value: 67.6086401236391 - type: euclidean_pearson value: 69.37492906687476 - type: euclidean_spearman value: 67.60865860108962 - type: main_score value: 67.6086401236391 - type: manhattan_pearson value: 69.34659479129859 - type: manhattan_spearman value: 67.592012200863 - type: pearson value: 67.09018713920469 - type: spearman value: 67.6086401236391 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (es) type: mteb/stsb_multi_mt config: es split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 42.895339590180996 - type: cosine_spearman value: 52.21235147253785 - type: euclidean_pearson value: 49.413874942919264 - type: euclidean_spearman value: 52.21203780406665 - type: main_score value: 52.21235147253785 - type: manhattan_pearson value: 49.276873027104855 - type: manhattan_spearman value: 52.16409604469493 - type: pearson value: 42.895339590180996 - type: spearman value: 52.21235147253785 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (ru) type: mteb/stsb_multi_mt config: ru split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 10.389925450857834 - type: cosine_spearman value: 8.908138291052701 - type: euclidean_pearson value: 9.890367033199064 - type: euclidean_spearman value: 8.770978113601167 - type: main_score value: 8.908138291052701 - type: manhattan_pearson value: 9.899760056143247 - type: manhattan_spearman value: 9.030970134574098 - type: pearson value: 10.389925450857834 - type: spearman value: 8.908138291052701 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (zh) type: mteb/stsb_multi_mt config: zh split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 3.2165863331249414 - type: cosine_spearman value: 0.7975692702633864 - type: euclidean_pearson value: 2.0618436826186066 - type: euclidean_spearman value: 0.5027230247162311 - type: main_score value: 0.7975692702633864 - type: manhattan_pearson value: 2.0514189695530325 - type: manhattan_spearman value: 0.39577079994867403 - type: pearson value: 3.2165863331249414 - type: spearman value: 0.7975692702633864 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (fr) type: mteb/stsb_multi_mt config: fr split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 46.17508747479316 - type: cosine_spearman value: 51.086872268140816 - type: euclidean_pearson value: 51.41891364659744 - type: euclidean_spearman value: 51.08665283035928 - type: main_score value: 51.086872268140816 - type: manhattan_pearson value: 51.361372778247606 - type: manhattan_spearman value: 51.045873818882924 - type: pearson value: 
46.17508747479316 - type: spearman value: 51.086872268140816 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (pt) type: mteb/stsb_multi_mt config: pt split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 40.639680830613514 - type: cosine_spearman value: 47.99664145034049 - type: euclidean_pearson value: 46.61505913234052 - type: euclidean_spearman value: 47.99654723025848 - type: main_score value: 47.99664145034049 - type: manhattan_pearson value: 46.594310151466146 - type: manhattan_spearman value: 47.96444879548329 - type: pearson value: 40.639680830613514 - type: spearman value: 47.99664145034049 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (pl) type: mteb/stsb_multi_mt config: pl split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 46.72373117676612 - type: cosine_spearman value: 52.865236864827345 - type: euclidean_pearson value: 52.45181901546032 - type: euclidean_spearman value: 52.86458795625298 - type: main_score value: 52.865236864827345 - type: manhattan_pearson value: 52.44185889658423 - type: manhattan_spearman value: 52.78491169411964 - type: pearson value: 46.72373117676612 - type: spearman value: 52.865236864827345 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (de) type: mteb/stsb_multi_mt config: de split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 48.138397241162444 - type: cosine_spearman value: 51.285304430536335 - type: euclidean_pearson value: 51.803064906612896 - type: euclidean_spearman value: 51.28542208854524 - type: main_score value: 51.285304430536335 - type: manhattan_pearson value: 51.819864335986956 - type: manhattan_spearman value: 51.32840976987932 - type: pearson value: 48.138397241162444 - type: spearman value: 51.285304430536335 - task: type: Reranking dataset: name: MTEB SciDocsRR (default) type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: main_score value: 60.74844680566163 - type: map value: 60.74844680566163 - type: mrr value: 84.68450485607349 - type: nAUC_map_diff1 value: 13.078055417971749 - type: nAUC_map_max value: 47.937301739074215 - type: nAUC_map_std value: 34.26921463872339 - type: nAUC_mrr_diff1 value: 42.90446482292105 - type: nAUC_mrr_max value: 59.75684998106037 - type: nAUC_mrr_std value: 30.107306162191268 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions (default) type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cosine_accuracy value: 99.44851485148514 - type: cosine_accuracy_threshold value: 95.47240059357654 - type: cosine_ap value: 68.22522420879186 - type: cosine_f1 value: 65.92635885447106 - type: cosine_f1_threshold value: 94.98664208777299 - type: cosine_precision value: 79.32489451476793 - type: cosine_recall value: 56.39999999999999 - type: dot_accuracy value: 99.44851485148514 - type: dot_accuracy_threshold value: 95.47240056095825 - type: dot_ap value: 68.22522420879186 - type: dot_f1 value: 65.92635885447106 - type: dot_f1_threshold value: 94.98664205438727 - type: dot_precision value: 79.32489451476793 - type: dot_recall value: 56.39999999999999 - type: euclidean_accuracy value: 99.44851485148514 - type: euclidean_accuracy_threshold value: 30.091857225199625 - type: euclidean_ap value: 68.22522420879186 - type: 
euclidean_f1 value: 65.92635885447106 - type: euclidean_f1_threshold value: 31.664989847761138 - type: euclidean_precision value: 79.32489451476793 - type: euclidean_recall value: 56.39999999999999 - type: main_score value: 68.28159512609737 - type: manhattan_accuracy value: 99.44851485148514 - type: manhattan_accuracy_threshold value: 1519.5971755477553 - type: manhattan_ap value: 68.28159512609737 - type: manhattan_f1 value: 66.05818596691385 - type: manhattan_f1_threshold value: 1628.6210010065347 - type: manhattan_precision value: 76.89243027888446 - type: manhattan_recall value: 57.9 - type: max_accuracy value: 99.44851485148514 - type: max_ap value: 68.28159512609737 - type: max_f1 value: 66.05818596691385 - type: max_precision value: 79.32489451476793 - type: max_recall value: 57.9 - type: similarity_accuracy value: 99.44851485148514 - type: similarity_accuracy_threshold value: 95.47240059357654 - type: similarity_ap value: 68.22522420879186 - type: similarity_f1 value: 65.92635885447106 - type: similarity_f1_threshold value: 94.98664208777299 - type: similarity_precision value: 79.32489451476793 - type: similarity_recall value: 56.39999999999999 - task: type: Clustering dataset: name: MTEB StackExchangeClustering (default) type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: main_score value: 29.30513928170411 - type: v_measure value: 29.30513928170411 - type: v_measure_std value: 4.167908098359504 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions (default) type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: main_score value: 41.60577705014483 - type: map value: 41.60577705014483 - type: mrr value: 42.046595153212806 - type: nAUC_map_diff1 value: 29.435613304703427 - type: nAUC_map_max value: 23.041089610073772 - type: nAUC_map_std value: 4.187983544965867 - type: nAUC_mrr_diff1 value: 28.24912241668722 - type: nAUC_mrr_max value: 23.844594928925574 - type: nAUC_mrr_std value: 5.300127051350153 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification (default) type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 61.03515625 - type: ap value: 10.357109818250033 - type: ap_weighted value: 10.357109818250033 - type: f1 value: 46.79659702416427 - type: f1_weighted value: 69.34093343990779 - type: main_score value: 61.03515625 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification (default) type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 54.88964346349745 - type: f1 value: 54.88849570146398 - type: f1_weighted value: 54.0202173220827 - type: main_score value: 54.88964346349745 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering (default) type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: main_score value: 25.77793337013197 - type: v_measure value: 25.77793337013197 - type: v_measure_std value: 1.7036625620777253 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 (default) type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: 
cosine_accuracy value: 83.50718245216666 - type: cosine_accuracy_threshold value: 92.85797990005872 - type: cosine_ap value: 64.57501485077721 - type: cosine_f1 value: 61.107669433775236 - type: cosine_f1_threshold value: 90.91770372653797 - type: cosine_precision value: 57.60336370007008 - type: cosine_recall value: 65.06596306068602 - type: dot_accuracy value: 83.50718245216666 - type: dot_accuracy_threshold value: 92.85797986316105 - type: dot_ap value: 64.57501485077721 - type: dot_f1 value: 61.107669433775236 - type: dot_f1_threshold value: 90.91770369108825 - type: dot_precision value: 57.60336370007008 - type: dot_recall value: 65.06596306068602 - type: euclidean_accuracy value: 83.50718245216666 - type: euclidean_accuracy_threshold value: 37.794231852628414 - type: euclidean_ap value: 64.57501485077721 - type: euclidean_f1 value: 61.107669433775236 - type: euclidean_f1_threshold value: 42.61993960299444 - type: euclidean_precision value: 57.60336370007008 - type: euclidean_recall value: 65.06596306068602 - type: main_score value: 64.57501485077721 - type: manhattan_accuracy value: 83.48930082851524 - type: manhattan_accuracy_threshold value: 1897.2244120282544 - type: manhattan_ap value: 64.55099351854031 - type: manhattan_f1 value: 61.062609129458714 - type: manhattan_f1_threshold value: 2160.535839208718 - type: manhattan_precision value: 57.89971617786187 - type: manhattan_recall value: 64.5910290237467 - type: max_accuracy value: 83.50718245216666 - type: max_ap value: 64.57501485077721 - type: max_f1 value: 61.107669433775236 - type: max_precision value: 57.89971617786187 - type: max_recall value: 65.06596306068602 - type: similarity_accuracy value: 83.50718245216666 - type: similarity_accuracy_threshold value: 92.85797990005872 - type: similarity_ap value: 64.57501485077721 - type: similarity_f1 value: 61.107669433775236 - type: similarity_f1_threshold value: 90.91770372653797 - type: similarity_precision value: 57.60336370007008 - type: similarity_recall value: 65.06596306068602 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus (default) type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cosine_accuracy value: 86.35463965537315 - type: cosine_accuracy_threshold value: 93.93182168113243 - type: cosine_ap value: 79.17988590079685 - type: cosine_f1 value: 71.77413258749716 - type: cosine_f1_threshold value: 92.7978491290961 - type: cosine_precision value: 70.48997772828508 - type: cosine_recall value: 73.10594394825993 - type: dot_accuracy value: 86.35463965537315 - type: dot_accuracy_threshold value: 93.9318216501234 - type: dot_ap value: 79.17988590079685 - type: dot_f1 value: 71.77413258749716 - type: dot_f1_threshold value: 92.79784909821515 - type: dot_precision value: 70.48997772828508 - type: dot_recall value: 73.10594394825993 - type: euclidean_accuracy value: 86.35463965537315 - type: euclidean_accuracy_threshold value: 34.837274051981524 - type: euclidean_ap value: 79.17988575609482 - type: euclidean_f1 value: 71.77413258749716 - type: euclidean_f1_threshold value: 37.95299953339363 - type: euclidean_precision value: 70.48997772828508 - type: euclidean_recall value: 73.10594394825993 - type: main_score value: 79.17988590079685 - type: manhattan_accuracy value: 86.36046105483757 - type: manhattan_accuracy_threshold value: 1771.5702122947137 - type: manhattan_ap value: 79.16559289648251 - type: manhattan_f1 value: 71.8502354427472 - type: 
manhattan_f1_threshold value: 1912.7281549009595 - type: manhattan_precision value: 71.45359019264448 - type: manhattan_recall value: 72.25130890052355 - type: max_accuracy value: 86.36046105483757 - type: max_ap value: 79.17988590079685 - type: max_f1 value: 71.8502354427472 - type: max_precision value: 71.45359019264448 - type: max_recall value: 73.10594394825993 - type: similarity_accuracy value: 86.35463965537315 - type: similarity_accuracy_threshold value: 93.93182168113243 - type: similarity_ap value: 79.17988590079685 - type: similarity_f1 value: 71.77413258749716 - type: similarity_f1_threshold value: 92.7978491290961 - type: similarity_precision value: 70.48997772828508 - type: similarity_recall value: 73.10594394825993 ---
[ "BIOSSES" ]
Non_BioNLP
{"tags": ["mteb"], "model-index": [{"name": "no_model_name_available", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en-ext)", "type": "mteb/amazon_counterfactual", "config": "en-ext", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 68.60569715142428}, {"type": "ap", "value": 19.05710055685074}, {"type": "ap_weighted", "value": 19.05710055685074}, {"type": "f1", "value": 56.581673345537695}, {"type": "f1_weighted", "value": 74.61143344921274}, {"type": "main_score", "value": 68.60569715142428}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 68.56716417910447}, {"type": "ap", "value": 31.32344301280815}, {"type": "ap_weighted", "value": 31.32344301280815}, {"type": "f1", "value": 62.570662383384025}, {"type": "f1_weighted", "value": 71.61789541976941}, {"type": "main_score", "value": 68.56716417910447}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (de)", "type": "mteb/amazon_counterfactual", "config": "de", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 63.276231263383295}, {"type": "ap", "value": 77.029702826753}, {"type": "ap_weighted", "value": 77.029702826753}, {"type": "f1", "value": 61.38234936043525}, {"type": "f1_weighted", "value": 64.54688276108833}, {"type": "main_score", "value": 63.276231263383295}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (ja)", "type": "mteb/amazon_counterfactual", "config": "ja", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 44.368308351177724}, {"type": "ap", "value": 10.954835146791183}, {"type": "ap_weighted", "value": 10.954835146791183}, {"type": "f1", "value": 36.62906436161906}, {"type": "f1_weighted", "value": 51.69895802800691}, {"type": "main_score", "value": 44.368308351177724}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 36.808}, {"type": "f1", "value": 34.68301166695203}, {"type": "f1_weighted", "value": 34.68301166695202}, {"type": "main_score", "value": 36.808}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (de)", "type": "mteb/amazon_reviews_multi", "config": "de", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 27.057999999999993}, {"type": "f1", "value": 26.24275950859653}, {"type": "f1_weighted", "value": 26.242759508596524}, {"type": "main_score", "value": 27.057999999999993}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (es)", "type": "mteb/amazon_reviews_multi", "config": "es", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 31.064000000000004}, {"type": "f1", "value": 29.708079352003708}, {"type": "f1_weighted", "value": 29.7080793520037}, {"type": "main_score", "value": 
31.064000000000004}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (fr)", "type": "mteb/amazon_reviews_multi", "config": "fr", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 29.43}, {"type": "f1", "value": 27.94855548400926}, {"type": "f1_weighted", "value": 27.94855548400926}, {"type": "main_score", "value": 29.43}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (ja)", "type": "mteb/amazon_reviews_multi", "config": "ja", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 20.787999999999997}, {"type": "f1", "value": 15.135022040282188}, {"type": "f1_weighted", "value": 15.135022040282188}, {"type": "main_score", "value": 20.787999999999997}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (zh)", "type": "mteb/amazon_reviews_multi", "config": "zh", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 21.914}, {"type": "f1", "value": 15.895956878609303}, {"type": "f1_weighted", "value": 15.895956878609303}, {"type": "main_score", "value": 21.914}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S (default)", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "main_score", "value": 19.890899955689118}, {"type": "v_measure", "value": 19.890899955689118}, {"type": "v_measure_std", "value": 15.234197799081727}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions (default)", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "main_score", "value": 49.123206371254746}, {"type": "map", "value": 49.123206371254746}, {"type": "mrr", "value": 62.31862551114629}, {"type": "nAUC_map_diff1", "value": 10.382490924755208}, {"type": "nAUC_map_max", "value": 18.748869416562293}, {"type": "nAUC_map_std", "value": 2.5774869725944383}, {"type": "nAUC_mrr_diff1", "value": 13.422210021656673}, {"type": "nAUC_mrr_max", "value": 24.878571083763035}, {"type": "nAUC_mrr_std", "value": -0.41050314967328677}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES (default)", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cosine_pearson", "value": 54.66661709953381}, {"type": "cosine_spearman", "value": 61.90442258245585}, {"type": "euclidean_pearson", "value": 57.802209299685984}, {"type": "euclidean_spearman", "value": 61.90442258245585}, {"type": "main_score", "value": 61.90442258245585}, {"type": "manhattan_pearson", "value": 58.05739954223122}, {"type": "manhattan_spearman", "value": 62.10683683315609}, {"type": "pearson", "value": 54.66661709953381}, {"type": "spearman", "value": 61.90442258245585}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification (default)", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 50.75324675324676}, {"type": "f1", "value": 50.08833636657759}, {"type": "f1_weighted", "value": 50.08833636657759}, {"type": "main_score", "value": 
50.75324675324676}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S (default)", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "main_score", "value": 19.543768231624547}, {"type": "v_measure", "value": 19.543768231624547}, {"type": "v_measure_std", "value": 0.8448669358199523}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification (default)", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 31.465}, {"type": "f1", "value": 27.518410158786278}, {"type": "f1_weighted", "value": 32.729446691751605}, {"type": "main_score", "value": 31.465}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 83.66393068855447}, {"type": "f1", "value": 83.02273407562654}, {"type": "f1_weighted", "value": 83.66877159114159}, {"type": "main_score", "value": 83.66393068855447}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (de)", "type": "mteb/mtop_domain", "config": "de", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 63.97013243167089}, {"type": "f1", "value": 60.85033241575268}, {"type": "f1_weighted", "value": 63.82115556806192}, {"type": "main_score", "value": 63.97013243167089}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (es)", "type": "mteb/mtop_domain", "config": "es", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 62.37491661107405}, {"type": "f1", "value": 60.94290925815502}, {"type": "f1_weighted", "value": 62.10717598146462}, {"type": "main_score", "value": 62.37491661107405}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (fr)", "type": "mteb/mtop_domain", "config": "fr", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 62.95020357031006}, {"type": "f1", "value": 60.758971765144224}, {"type": "f1_weighted", "value": 63.42247920372272}, {"type": "main_score", "value": 62.95020357031006}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (hi)", "type": "mteb/mtop_domain", "config": "hi", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 12.613840086052347}, {"type": "f1", "value": 6.5750442135283}, {"type": "f1_weighted", "value": 6.53244904380679}, {"type": "main_score", "value": 12.613840086052347}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (th)", "type": "mteb/mtop_domain", "config": "th", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 14.759493670886076}, {"type": "f1", "value": 8.12843236923924}, {"type": "f1_weighted", "value": 8.793246140296032}, {"type": "main_score", "value": 14.759493670886076}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", 
"revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 49.43228454172367}, {"type": "f1", "value": 34.55112542095168}, {"type": "f1_weighted", "value": 52.614378484454974}, {"type": "main_score", "value": 49.43228454172367}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (de)", "type": "mteb/mtop_intent", "config": "de", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 39.01662440123979}, {"type": "f1", "value": 23.82791663064076}, {"type": "f1_weighted", "value": 43.645398141967966}, {"type": "main_score", "value": 39.01662440123979}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (es)", "type": "mteb/mtop_intent", "config": "es", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 37.11140760507005}, {"type": "f1", "value": 21.935352507756388}, {"type": "f1_weighted", "value": 39.321275372065685}, {"type": "main_score", "value": 37.11140760507005}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (fr)", "type": "mteb/mtop_intent", "config": "fr", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 33.7770122142186}, {"type": "f1", "value": 22.220964590376273}, {"type": "f1_weighted", "value": 37.485286173160986}, {"type": "main_score", "value": 33.7770122142186}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (hi)", "type": "mteb/mtop_intent", "config": "hi", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 5.453567586948727}, {"type": "f1", "value": 0.7075326300577311}, {"type": "f1_weighted", "value": 2.3858630958577836}, {"type": "main_score", "value": 5.453567586948727}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (th)", "type": "mteb/mtop_intent", "config": "th", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 5.529837251356239}, {"type": "f1", "value": 1.2115090491792773}, {"type": "f1_weighted", "value": 3.498070456864493}, {"type": "main_score", "value": 5.529837251356239}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MasakhaNEWSClassification (eng)", "type": "mteb/masakhanews", "config": "eng", "split": "test", "revision": "18193f187b92da67168c655c9973a165ed9593dd"}, "metrics": [{"type": "accuracy", "value": 64.5042194092827}, {"type": "f1", "value": 62.368592308141814}, {"type": "f1_weighted", "value": 63.90417453510408}, {"type": "main_score", "value": 64.5042194092827}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MasakhaNEWSClusteringS2S (eng)", "type": "masakhane/masakhanews", "config": "eng", "split": "test", "revision": "8ccc72e69e65f40c70e117d8b3c08306bb788b60"}, "metrics": [{"type": "main_score", "value": 24.84564500417387}, {"type": "v_measure", "value": 24.84564500417387}, {"type": "v_measure_std", "value": 22.286703004465615}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ta)", "type": "mteb/amazon_massive_intent", "config": "ta", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 2.219233355749832}, {"type": "f1", "value": 
0.1932870095686131}, {"type": "f1_weighted", "value": 0.251235487639337}, {"type": "main_score", "value": 2.219233355749832}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ml)", "type": "mteb/amazon_massive_intent", "config": "ml", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 1.2844653665097512}, {"type": "f1", "value": 0.18710410412943543}, {"type": "f1_weighted", "value": 0.2739907174462001}, {"type": "main_score", "value": 1.2844653665097512}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (af)", "type": "mteb/amazon_massive_intent", "config": "af", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 32.982515131136516}, {"type": "f1", "value": 29.879476335364973}, {"type": "f1_weighted", "value": 32.59262194412672}, {"type": "main_score", "value": 32.982515131136516}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (bn)", "type": "mteb/amazon_massive_intent", "config": "bn", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 2.2125084061869535}, {"type": "f1", "value": 0.5736320148349802}, {"type": "f1_weighted", "value": 0.7371018417507617}, {"type": "main_score", "value": 2.2125084061869535}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (is)", "type": "mteb/amazon_massive_intent", "config": "is", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 27.165433759246802}, {"type": "f1", "value": 25.68362075943369}, {"type": "f1_weighted", "value": 25.71202157696122}, {"type": "main_score", "value": 27.165433759246802}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (el)", "type": "mteb/amazon_massive_intent", "config": "el", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 10.665770006724948}, {"type": "f1", "value": 5.114611283180833}, {"type": "f1_weighted", "value": 7.526848175428076}, {"type": "main_score", "value": 10.665770006724948}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sw)", "type": "mteb/amazon_massive_intent", "config": "sw", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 31.661062542030933}, {"type": "f1", "value": 31.298953203005986}, {"type": "f1_weighted", "value": 30.183076634560134}, {"type": "main_score", "value": 31.661062542030933}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (cy)", "type": "mteb/amazon_massive_intent", "config": "cy", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 27.995965030262276}, {"type": "f1", "value": 25.849404737727465}, {"type": "f1_weighted", "value": 26.922571545761638}, {"type": "main_score", "value": 27.995965030262276}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pt)", "type": "mteb/amazon_massive_intent", "config": "pt", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 36.73839946200404}, {"type": "f1", "value": 
35.6799981256784}, {"type": "f1_weighted", "value": 35.65583276626004}, {"type": "main_score", "value": 36.73839946200404}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fa)", "type": "mteb/amazon_massive_intent", "config": "fa", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 1.1062542030934768}, {"type": "f1", "value": 0.3829753109058956}, {"type": "f1_weighted", "value": 0.42459533841173747}, {"type": "main_score", "value": 1.1062542030934768}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (mn)", "type": "mteb/amazon_massive_intent", "config": "mn", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 2.3604572965702753}, {"type": "f1", "value": 0.9096234324517042}, {"type": "f1_weighted", "value": 0.9394595549389105}, {"type": "main_score", "value": 2.3604572965702753}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (de)", "type": "mteb/amazon_massive_intent", "config": "de", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 32.68997982515132}, {"type": "f1", "value": 29.986572248952147}, {"type": "f1_weighted", "value": 32.22231191644284}, {"type": "main_score", "value": 32.68997982515132}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sl)", "type": "mteb/amazon_massive_intent", "config": "sl", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 36.70477471418964}, {"type": "f1", "value": 33.50288534893127}, {"type": "f1_weighted", "value": 34.846130335010265}, {"type": "main_score", "value": 36.70477471418964}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (km)", "type": "mteb/amazon_massive_intent", "config": "km", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 2.96906523201076}, {"type": "f1", "value": 0.7797856721437596}, {"type": "f1_weighted", "value": 0.6236996914225641}, {"type": "main_score", "value": 2.96906523201076}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (az)", "type": "mteb/amazon_massive_intent", "config": "az", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 31.01882985877606}, {"type": "f1", "value": 29.527835951539323}, {"type": "f1_weighted", "value": 30.66568514409952}, {"type": "main_score", "value": 31.01882985877606}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (my)", "type": "mteb/amazon_massive_intent", "config": "my", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 3.2178883658372555}, {"type": "f1", "value": 0.5240681583697773}, {"type": "f1_weighted", "value": 0.9198214868347652}, {"type": "main_score", "value": 3.2178883658372555}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (it)", "type": "mteb/amazon_massive_intent", "config": "it", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 37.11499663752522}, {"type": "f1", "value": 36.36396173693096}, 
{"type": "f1_weighted", "value": 35.50337761684995}, {"type": "main_score", "value": 37.11499663752522}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sq)", "type": "mteb/amazon_massive_intent", "config": "sq", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 26.7350369872226}, {"type": "f1", "value": 25.812896452146234}, {"type": "f1_weighted", "value": 26.2226872478251}, {"type": "main_score", "value": 26.7350369872226}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (da)", "type": "mteb/amazon_massive_intent", "config": "da", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 34.97982515131137}, {"type": "f1", "value": 32.92316320729933}, {"type": "f1_weighted", "value": 33.68424734170567}, {"type": "main_score", "value": 34.97982515131137}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ka)", "type": "mteb/amazon_massive_intent", "config": "ka", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 1.546738399462004}, {"type": "f1", "value": 0.6491922803798055}, {"type": "f1_weighted", "value": 0.36416059882684426}, {"type": "main_score", "value": 1.546738399462004}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hu)", "type": "mteb/amazon_massive_intent", "config": "hu", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 25.16476126429052}, {"type": "f1", "value": 23.67218773633549}, {"type": "f1_weighted", "value": 23.6371559019449}, {"type": "main_score", "value": 25.16476126429052}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ms)", "type": "mteb/amazon_massive_intent", "config": "ms", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 33.79959650302623}, {"type": "f1", "value": 32.51301308582213}, {"type": "f1_weighted", "value": 32.526479564865305}, {"type": "main_score", "value": 33.79959650302623}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (tl)", "type": "mteb/amazon_massive_intent", "config": "tl", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 29.49226630800269}, {"type": "f1", "value": 28.94940260858102}, {"type": "f1_weighted", "value": 28.63948113059682}, {"type": "main_score", "value": 29.49226630800269}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (th)", "type": "mteb/amazon_massive_intent", "config": "th", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 1.6778749159381305}, {"type": "f1", "value": 0.9744693901937154}, {"type": "f1_weighted", "value": 0.691053805319416}, {"type": "main_score", "value": 1.6778749159381305}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fi)", "type": "mteb/amazon_massive_intent", "config": "fi", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 30.114324142568925}, {"type": "f1", "value": 29.430743039242152}, {"type": "f1_weighted", "value": 
29.04299307313548}, {"type": "main_score", "value": 30.114324142568925}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hi)", "type": "mteb/amazon_massive_intent", "config": "hi", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 2.797579018157364}, {"type": "f1", "value": 1.144033688398988}, {"type": "f1_weighted", "value": 1.0884768126381035}, {"type": "main_score", "value": 2.797579018157364}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (lv)", "type": "mteb/amazon_massive_intent", "config": "lv", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 32.54539340954942}, {"type": "f1", "value": 31.521139537198316}, {"type": "f1_weighted", "value": 31.530360085026093}, {"type": "main_score", "value": 32.54539340954942}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sv)", "type": "mteb/amazon_massive_intent", "config": "sv", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 30.783456624075324}, {"type": "f1", "value": 29.604725003907866}, {"type": "f1_weighted", "value": 29.685617024715732}, {"type": "main_score", "value": 30.783456624075324}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (am)", "type": "mteb/amazon_massive_intent", "config": "am", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 1.8426361802286482}, {"type": "f1", "value": 0.33542666799543247}, {"type": "f1_weighted", "value": 0.2711276986927232}, {"type": "main_score", "value": 1.8426361802286482}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (jv)", "type": "mteb/amazon_massive_intent", "config": "jv", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 30.178211163416268}, {"type": "f1", "value": 29.37132431463145}, {"type": "f1_weighted", "value": 29.494452777308833}, {"type": "main_score", "value": 30.178211163416268}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ru)", "type": "mteb/amazon_massive_intent", "config": "ru", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 2.649630127774042}, {"type": "f1", "value": 1.7505098874789995}, {"type": "f1_weighted", "value": 1.4639682364635813}, {"type": "main_score", "value": 2.649630127774042}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-TW)", "type": "mteb/amazon_massive_intent", "config": "zh-TW", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 4.468728984532616}, {"type": "f1", "value": 2.090461109042727}, {"type": "f1_weighted", "value": 2.7853674561791295}, {"type": "main_score", "value": 4.468728984532616}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fr)", "type": "mteb/amazon_massive_intent", "config": "fr", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 33.27168796234029}, {"type": "f1", "value": 32.00481372908824}, {"type": "f1_weighted", "value": 
32.159041657111764}, {"type": "main_score", "value": 33.27168796234029}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-CN)", "type": "mteb/amazon_massive_intent", "config": "zh-CN", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 0.749831876260928}, {"type": "f1", "value": 0.11432947296104061}, {"type": "f1_weighted", "value": 0.0764038848837725}, {"type": "main_score", "value": 0.749831876260928}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (nb)", "type": "mteb/amazon_massive_intent", "config": "nb", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 32.125084061869536}, {"type": "f1", "value": 30.154247947358247}, {"type": "f1_weighted", "value": 30.87288096360447}, {"type": "main_score", "value": 32.125084061869536}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (kn)", "type": "mteb/amazon_massive_intent", "config": "kn", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 1.617350369872226}, {"type": "f1", "value": 0.9905489260231543}, {"type": "f1_weighted", "value": 0.7953294182207199}, {"type": "main_score", "value": 1.617350369872226}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ja)", "type": "mteb/amazon_massive_intent", "config": "ja", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 3.806321452589106}, {"type": "f1", "value": 1.9196646149428953}, {"type": "f1_weighted", "value": 1.6645242984042585}, {"type": "main_score", "value": 3.806321452589106}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (nl)", "type": "mteb/amazon_massive_intent", "config": "nl", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 35.77673167451245}, {"type": "f1", "value": 33.18041618186975}, {"type": "f1_weighted", "value": 35.833046113268786}, {"type": "main_score", "value": 35.77673167451245}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 53.4969737726967}, {"type": "f1", "value": 51.88341293441036}, {"type": "f1_weighted", "value": 53.20514357568628}, {"type": "main_score", "value": 53.4969737726967}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ar)", "type": "mteb/amazon_massive_intent", "config": "ar", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 4.784801613987895}, {"type": "f1", "value": 1.969274839533907}, {"type": "f1_weighted", "value": 2.4942212470758016}, {"type": "main_score", "value": 4.784801613987895}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (es)", "type": "mteb/amazon_massive_intent", "config": "es", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 31.069266980497645}, {"type": "f1", "value": 31.48265427665997}, {"type": "f1_weighted", "value": 30.3696521492686}, 
{"type": "main_score", "value": 31.069266980497645}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (he)", "type": "mteb/amazon_massive_intent", "config": "he", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 1.9670477471418968}, {"type": "f1", "value": 0.45697365831527426}, {"type": "f1_weighted", "value": 0.2853963696007572}, {"type": "main_score", "value": 1.9670477471418968}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (te)", "type": "mteb/amazon_massive_intent", "config": "te", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 2.1015467383994615}, {"type": "f1", "value": 0.5210481229705188}, {"type": "f1_weighted", "value": 0.5924944385210995}, {"type": "main_score", "value": 2.1015467383994615}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (tr)", "type": "mteb/amazon_massive_intent", "config": "tr", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 31.318090114324143}, {"type": "f1", "value": 30.05810538658039}, {"type": "f1_weighted", "value": 30.360376696442504}, {"type": "main_score", "value": 31.318090114324143}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (vi)", "type": "mteb/amazon_massive_intent", "config": "vi", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 19.078681909885677}, {"type": "f1", "value": 18.360818504390085}, {"type": "f1_weighted", "value": 18.15470646878023}, {"type": "main_score", "value": 19.078681909885677}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (id)", "type": "mteb/amazon_massive_intent", "config": "id", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 35.564895763281775}, {"type": "f1", "value": 35.587064959631185}, {"type": "f1_weighted", "value": 34.4349962874478}, {"type": "main_score", "value": 35.564895763281775}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ko)", "type": "mteb/amazon_massive_intent", "config": "ko", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 3.2111634162743776}, {"type": "f1", "value": 1.4524341197394974}, {"type": "f1_weighted", "value": 1.3395307357797508}, {"type": "main_score", "value": 3.2111634162743776}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ro)", "type": "mteb/amazon_massive_intent", "config": "ro", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 33.99798251513114}, {"type": "f1", "value": 32.69281167233965}, {"type": "f1_weighted", "value": 32.22827641327085}, {"type": "main_score", "value": 33.99798251513114}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pl)", "type": "mteb/amazon_massive_intent", "config": "pl", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 29.660390047074646}, {"type": "f1", "value": 28.090771859451536}, {"type": "f1_weighted", "value": 29.50058846849659}, {"type": 
"main_score", "value": 29.660390047074646}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ur)", "type": "mteb/amazon_massive_intent", "config": "ur", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 2.118359112306658}, {"type": "f1", "value": 1.0794128790274702}, {"type": "f1_weighted", "value": 1.0149237288074577}, {"type": "main_score", "value": 2.118359112306658}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hy)", "type": "mteb/amazon_massive_intent", "config": "hy", "split": "test", "revision": "4672e20407010da34463acc759c162ca9734bca6"}, "metrics": [{"type": "accuracy", "value": 2.242770679219906}, {"type": "f1", "value": 0.6772746623940161}, {"type": "f1_weighted", "value": 0.5935033259869644}, {"type": "main_score", "value": 2.242770679219906}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ta)", "type": "mteb/amazon_massive_scenario", "config": "ta", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 4.7679892400807}, {"type": "f1", "value": 0.6958635242707644}, {"type": "f1_weighted", "value": 0.7383116540131966}, {"type": "main_score", "value": 4.7679892400807}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ml)", "type": "mteb/amazon_massive_scenario", "config": "ml", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 4.599865501008742}, {"type": "f1", "value": 0.8680195452904774}, {"type": "f1_weighted", "value": 1.3022709162006496}, {"type": "main_score", "value": 4.599865501008742}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (af)", "type": "mteb/amazon_massive_scenario", "config": "af", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 45.80026899798251}, {"type": "f1", "value": 42.09162084904855}, {"type": "f1_weighted", "value": 45.937899984554896}, {"type": "main_score", "value": 45.80026899798251}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (bn)", "type": "mteb/amazon_massive_scenario", "config": "bn", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 7.935440484196368}, {"type": "f1", "value": 2.054473625082069}, {"type": "f1_weighted", "value": 2.331310360179839}, {"type": "main_score", "value": 7.935440484196368}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (is)", "type": "mteb/amazon_massive_scenario", "config": "is", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 39.525891055817084}, {"type": "f1", "value": 35.64315129468117}, {"type": "f1_weighted", "value": 38.873288696604064}, {"type": "main_score", "value": 39.525891055817084}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (el)", "type": "mteb/amazon_massive_scenario", "config": "el", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 16.822461331540016}, {"type": "f1", "value": 9.528868617590787}, {"type": "f1_weighted", "value": 12.052833175443745}, {"type": 
"main_score", "value": 16.822461331540016}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sw)", "type": "mteb/amazon_massive_scenario", "config": "sw", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 41.44922663080027}, {"type": "f1", "value": 38.29694592816531}, {"type": "f1_weighted", "value": 40.494682049238065}, {"type": "main_score", "value": 41.44922663080027}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (cy)", "type": "mteb/amazon_massive_scenario", "config": "cy", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 36.37525218560861}, {"type": "f1", "value": 32.742079476295714}, {"type": "f1_weighted", "value": 36.41453434396975}, {"type": "main_score", "value": 36.37525218560861}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pt)", "type": "mteb/amazon_massive_scenario", "config": "pt", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 43.79959650302623}, {"type": "f1", "value": 41.74604131799107}, {"type": "f1_weighted", "value": 41.89697637112924}, {"type": "main_score", "value": 43.79959650302623}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fa)", "type": "mteb/amazon_massive_scenario", "config": "fa", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 6.2844653665097505}, {"type": "f1", "value": 1.1363404526147562}, {"type": "f1_weighted", "value": 1.507290141564863}, {"type": "main_score", "value": 6.2844653665097505}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (mn)", "type": "mteb/amazon_massive_scenario", "config": "mn", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 5.406859448554135}, {"type": "f1", "value": 2.560817113707556}, {"type": "f1_weighted", "value": 2.408341973383642}, {"type": "main_score", "value": 5.406859448554135}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (de)", "type": "mteb/amazon_massive_scenario", "config": "de", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 43.08002689979825}, {"type": "f1", "value": 39.31491179400749}, {"type": "f1_weighted", "value": 42.387701010649735}, {"type": "main_score", "value": 43.08002689979825}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sl)", "type": "mteb/amazon_massive_scenario", "config": "sl", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 46.30127774041695}, {"type": "f1", "value": 43.177548916667774}, {"type": "f1_weighted", "value": 46.02641155529322}, {"type": "main_score", "value": 46.30127774041695}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (km)", "type": "mteb/amazon_massive_scenario", "config": "km", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 5.968392737054471}, {"type": "f1", "value": 1.558644350101979}, {"type": "f1_weighted", "value": 2.184277748991485}, {"type": 
"main_score", "value": 5.968392737054471}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (az)", "type": "mteb/amazon_massive_scenario", "config": "az", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 39.08204438466712}, {"type": "f1", "value": 37.19465931596499}, {"type": "f1_weighted", "value": 37.92508333682256}, {"type": "main_score", "value": 39.08204438466712}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (my)", "type": "mteb/amazon_massive_scenario", "config": "my", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 5.712844653665098}, {"type": "f1", "value": 2.3513952725160445}, {"type": "f1_weighted", "value": 2.591355133449796}, {"type": "main_score", "value": 5.712844653665098}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (it)", "type": "mteb/amazon_massive_scenario", "config": "it", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 44.79488903833221}, {"type": "f1", "value": 42.216456011086514}, {"type": "f1_weighted", "value": 43.63836497077992}, {"type": "main_score", "value": 44.79488903833221}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sq)", "type": "mteb/amazon_massive_scenario", "config": "sq", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 38.91055817081372}, {"type": "f1", "value": 36.658118919837705}, {"type": "f1_weighted", "value": 38.285047658406185}, {"type": "main_score", "value": 38.91055817081372}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (da)", "type": "mteb/amazon_massive_scenario", "config": "da", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 42.82447881640888}, {"type": "f1", "value": 39.71183576580626}, {"type": "f1_weighted", "value": 42.99955794883917}, {"type": "main_score", "value": 42.82447881640888}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ka)", "type": "mteb/amazon_massive_scenario", "config": "ka", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 6.9569603227975785}, {"type": "f1", "value": 1.3249507928345723}, {"type": "f1_weighted", "value": 2.1526435195273512}, {"type": "main_score", "value": 6.9569603227975785}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hu)", "type": "mteb/amazon_massive_scenario", "config": "hu", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 35.47747141896436}, {"type": "f1", "value": 32.68368628376791}, {"type": "f1_weighted", "value": 34.486227854192805}, {"type": "main_score", "value": 35.47747141896436}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ms)", "type": "mteb/amazon_massive_scenario", "config": "ms", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 44.20645595158036}, {"type": "f1", "value": 40.46275245484104}, {"type": "f1_weighted", "value": 43.07451372640555}, {"type": 
"main_score", "value": 44.20645595158036}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tl)", "type": "mteb/amazon_massive_scenario", "config": "tl", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 37.565568258238066}, {"type": "f1", "value": 34.34228491467635}, {"type": "f1_weighted", "value": 36.715470304700304}, {"type": "main_score", "value": 37.565568258238066}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (th)", "type": "mteb/amazon_massive_scenario", "config": "th", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 4.428379287155346}, {"type": "f1", "value": 2.118733356397359}, {"type": "f1_weighted", "value": 1.6597464958411214}, {"type": "main_score", "value": 4.428379287155346}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fi)", "type": "mteb/amazon_massive_scenario", "config": "fi", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 34.67720242098184}, {"type": "f1", "value": 31.648714845929625}, {"type": "f1_weighted", "value": 34.62782835061803}, {"type": "main_score", "value": 34.67720242098184}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hi)", "type": "mteb/amazon_massive_scenario", "config": "hi", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 8.006052454606591}, {"type": "f1", "value": 2.1079480174137237}, {"type": "f1_weighted", "value": 2.1631918405037758}, {"type": "main_score", "value": 8.006052454606591}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (lv)", "type": "mteb/amazon_massive_scenario", "config": "lv", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 39.22999327505043}, {"type": "f1", "value": 37.16721131021293}, {"type": "f1_weighted", "value": 39.397613949853735}, {"type": "main_score", "value": 39.22999327505043}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sv)", "type": "mteb/amazon_massive_scenario", "config": "sv", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 41.55010087424344}, {"type": "f1", "value": 38.32223910141539}, {"type": "f1_weighted", "value": 41.72498846160742}, {"type": "main_score", "value": 41.55010087424344}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (am)", "type": "mteb/amazon_massive_scenario", "config": "am", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 3.0363147276395432}, {"type": "f1", "value": 0.4951111891349476}, {"type": "f1_weighted", "value": 0.4456347917226148}, {"type": "main_score", "value": 3.0363147276395432}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (jv)", "type": "mteb/amazon_massive_scenario", "config": "jv", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 42.84801613987895}, {"type": "f1", "value": 40.77209890733345}, {"type": "f1_weighted", "value": 42.29511181907119}, 
{"type": "main_score", "value": 42.84801613987895}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ru)", "type": "mteb/amazon_massive_scenario", "config": "ru", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 8.140551445864155}, {"type": "f1", "value": 3.088889182397252}, {"type": "f1_weighted", "value": 3.382529160821981}, {"type": "main_score", "value": 8.140551445864155}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-TW)", "type": "mteb/amazon_massive_scenario", "config": "zh-TW", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 10.063887020847343}, {"type": "f1", "value": 4.3953906298120415}, {"type": "f1_weighted", "value": 6.1030360630370675}, {"type": "main_score", "value": 10.063887020847343}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fr)", "type": "mteb/amazon_massive_scenario", "config": "fr", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 40.86079354404843}, {"type": "f1", "value": 38.12848430733589}, {"type": "f1_weighted", "value": 39.61399818207077}, {"type": "main_score", "value": 40.86079354404843}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-CN)", "type": "mteb/amazon_massive_scenario", "config": "zh-CN", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 3.1809011432414254}, {"type": "f1", "value": 0.6663078501713696}, {"type": "f1_weighted", "value": 0.6161504543566888}, {"type": "main_score", "value": 3.1809011432414254}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nb)", "type": "mteb/amazon_massive_scenario", "config": "nb", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 38.991257565568255}, {"type": "f1", "value": 35.8711142606479}, {"type": "f1_weighted", "value": 39.27058914996822}, {"type": "main_score", "value": 38.991257565568255}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (kn)", "type": "mteb/amazon_massive_scenario", "config": "kn", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 7.5117686617350365}, {"type": "f1", "value": 2.730333236177}, {"type": "f1_weighted", "value": 2.476626926704587}, {"type": "main_score", "value": 7.5117686617350365}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ja)", "type": "mteb/amazon_massive_scenario", "config": "ja", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 8.32548755884331}, {"type": "f1", "value": 3.0996007067176996}, {"type": "f1_weighted", "value": 3.0676442629069967}, {"type": "main_score", "value": 8.32548755884331}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nl)", "type": "mteb/amazon_massive_scenario", "config": "nl", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 47.57901815736382}, {"type": "f1", "value": 43.47365742357309}, {"type": "f1_weighted", "value": 
47.581511497169764}, {"type": "main_score", "value": 47.57901815736382}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 63.84330867518494}, {"type": "f1", "value": 61.80623184800081}, {"type": "f1_weighted", "value": 63.66823920852459}, {"type": "main_score", "value": 63.84330867518494}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ar)", "type": "mteb/amazon_massive_scenario", "config": "ar", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 10.060524546065905}, {"type": "f1", "value": 4.697788726183898}, {"type": "f1_weighted", "value": 8.0688374518688}, {"type": "main_score", "value": 10.060524546065905}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (es)", "type": "mteb/amazon_massive_scenario", "config": "es", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 39.02824478816409}, {"type": "f1", "value": 37.25613303442762}, {"type": "f1_weighted", "value": 38.22861284484312}, {"type": "main_score", "value": 39.02824478816409}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (he)", "type": "mteb/amazon_massive_scenario", "config": "he", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 5.0638870208473445}, {"type": "f1", "value": 1.0753261358276471}, {"type": "f1_weighted", "value": 1.0802883978030118}, {"type": "main_score", "value": 5.0638870208473445}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (te)", "type": "mteb/amazon_massive_scenario", "config": "te", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 6.321452589105584}, {"type": "f1", "value": 1.5829376262790664}, {"type": "f1_weighted", "value": 2.232184358298365}, {"type": "main_score", "value": 6.321452589105584}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tr)", "type": "mteb/amazon_massive_scenario", "config": "tr", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 37.21923335574983}, {"type": "f1", "value": 36.993268170979576}, {"type": "f1_weighted", "value": 35.67645464322424}, {"type": "main_score", "value": 37.21923335574983}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (vi)", "type": "mteb/amazon_massive_scenario", "config": "vi", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 25.934767989240076}, {"type": "f1", "value": 24.616943306685748}, {"type": "f1_weighted", "value": 24.74309285569417}, {"type": "main_score", "value": 25.934767989240076}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (id)", "type": "mteb/amazon_massive_scenario", "config": "id", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 44.69401479488904}, {"type": "f1", "value": 42.41464498194295}, {"type": "f1_weighted", "value": 
44.26134318268762}, {"type": "main_score", "value": 44.69401479488904}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ko)", "type": "mteb/amazon_massive_scenario", "config": "ko", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 8.47343644922663}, {"type": "f1", "value": 2.9718553546241506}, {"type": "f1_weighted", "value": 3.9449930229420818}, {"type": "main_score", "value": 8.47343644922663}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ro)", "type": "mteb/amazon_massive_scenario", "config": "ro", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 42.92199058507061}, {"type": "f1", "value": 40.00185738475351}, {"type": "f1_weighted", "value": 42.53838435113089}, {"type": "main_score", "value": 42.92199058507061}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pl)", "type": "mteb/amazon_massive_scenario", "config": "pl", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 36.856086079354405}, {"type": "f1", "value": 35.85809216604705}, {"type": "f1_weighted", "value": 36.503220372495356}, {"type": "main_score", "value": 36.856086079354405}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ur)", "type": "mteb/amazon_massive_scenario", "config": "ur", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 7.427706792199058}, {"type": "f1", "value": 2.355649221281433}, {"type": "f1_weighted", "value": 2.3635737714890097}, {"type": "main_score", "value": 7.427706792199058}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hy)", "type": "mteb/amazon_massive_scenario", "config": "hy", "split": "test", "revision": "fad2c6e8459f9e1c45d9315f4953d921437d70f8"}, "metrics": [{"type": "accuracy", "value": 7.2494956287827845}, {"type": "f1", "value": 3.0267066892790786}, {"type": "f1_weighted", "value": 2.228737132597149}, {"type": "main_score", "value": 7.2494956287827845}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S (default)", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "main_score", "value": 22.3149940028344}, {"type": "v_measure", "value": 22.3149940028344}, {"type": "v_measure_std", "value": 1.184495521159966}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking (default)", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "59042f120c80e8afa9cdbb224f67076cec0fc9a7"}, "metrics": [{"type": "main_score", "value": 26.874241404290856}, {"type": "map", "value": 26.874241404290856}, {"type": "mrr", "value": 27.50127374810197}, {"type": "nAUC_map_diff1", "value": 20.72193125860396}, {"type": "nAUC_map_max", "value": -21.181361650744908}, {"type": "nAUC_map_std", "value": -21.136143423992458}, {"type": "nAUC_mrr_diff1", "value": 18.217458666186445}, {"type": "nAUC_mrr_max", "value": -14.657975701378914}, {"type": "nAUC_mrr_std", "value": -17.948245474413323}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB OpusparcusPC (de)", "type": "GEM/opusparcus", "config": "de", "split": "test", 
"revision": "9e9b1f8ef51616073f47f306f7f47dd91663f86a"}, "metrics": [{"type": "cosine_accuracy", "value": 99.90448901623687}, {"type": "cosine_accuracy_threshold", "value": 32.084010045061795}, {"type": "cosine_ap", "value": 100.0}, {"type": "cosine_f1", "value": 99.95222169135212}, {"type": "cosine_f1_threshold", "value": 32.084010045061795}, {"type": "cosine_precision", "value": 100.0}, {"type": "cosine_recall", "value": 99.90448901623687}, {"type": "dot_accuracy", "value": 99.90448901623687}, {"type": "dot_accuracy_threshold", "value": 14.194202811836867}, {"type": "dot_ap", "value": 100.0}, {"type": "dot_f1", "value": 99.95222169135212}, {"type": "dot_f1_threshold", "value": 14.194202811836867}, {"type": "dot_precision", "value": 100.0}, {"type": "dot_recall", "value": 99.90448901623687}, {"type": "euclidean_accuracy", "value": 99.90448901623687}, {"type": "euclidean_accuracy_threshold", "value": 116.50380181599331}, {"type": "euclidean_ap", "value": 100.0}, {"type": "euclidean_f1", "value": 99.95222169135212}, {"type": "euclidean_f1_threshold", "value": 116.50380181599331}, {"type": "euclidean_precision", "value": 100.0}, {"type": "euclidean_recall", "value": 99.90448901623687}, {"type": "main_score", "value": 100.0}, {"type": "manhattan_accuracy", "value": 99.90448901623687}, {"type": "manhattan_accuracy_threshold", "value": 5994.10849076798}, {"type": "manhattan_ap", "value": 100.0}, {"type": "manhattan_f1", "value": 99.95222169135212}, {"type": "manhattan_f1_threshold", "value": 5994.10849076798}, {"type": "manhattan_precision", "value": 100.0}, {"type": "manhattan_recall", "value": 99.90448901623687}, {"type": "max_accuracy", "value": 99.90448901623687}, {"type": "max_ap", "value": 100.0}, {"type": "max_f1", "value": 99.95222169135212}, {"type": "max_precision", "value": 100.0}, {"type": "max_recall", "value": 99.90448901623687}, {"type": "similarity_accuracy", "value": 99.90448901623687}, {"type": "similarity_accuracy_threshold", "value": 32.084010045061795}, {"type": "similarity_ap", "value": 100.0}, {"type": "similarity_f1", "value": 99.95222169135212}, {"type": "similarity_f1_threshold", "value": 32.084010045061795}, {"type": "similarity_precision", "value": 100.0}, {"type": "similarity_recall", "value": 99.90448901623687}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB OpusparcusPC (en)", "type": "GEM/opusparcus", "config": "en", "split": "test", "revision": "9e9b1f8ef51616073f47f306f7f47dd91663f86a"}, "metrics": [{"type": "cosine_accuracy", "value": 99.89816700610999}, {"type": "cosine_accuracy_threshold", "value": 40.08682069986206}, {"type": "cosine_ap", "value": 100.0}, {"type": "cosine_f1", "value": 99.9490575649516}, {"type": "cosine_f1_threshold", "value": 40.08682069986206}, {"type": "cosine_precision", "value": 100.0}, {"type": "cosine_recall", "value": 99.89816700610999}, {"type": "dot_accuracy", "value": 99.89816700610999}, {"type": "dot_accuracy_threshold", "value": 40.08682068226012}, {"type": "dot_ap", "value": 100.0}, {"type": "dot_f1", "value": 99.9490575649516}, {"type": "dot_f1_threshold", "value": 40.08682068226012}, {"type": "dot_precision", "value": 100.0}, {"type": "dot_recall", "value": 99.89816700610999}, {"type": "euclidean_accuracy", "value": 99.89816700610999}, {"type": "euclidean_accuracy_threshold", "value": 109.46519126990579}, {"type": "euclidean_ap", "value": 100.0}, {"type": "euclidean_f1", "value": 99.9490575649516}, {"type": "euclidean_f1_threshold", "value": 109.46519126990579}, {"type": "euclidean_precision", "value": 
100.0}, {"type": "euclidean_recall", "value": 99.89816700610999}, {"type": "main_score", "value": 100.0}, {"type": "manhattan_accuracy", "value": 99.89816700610999}, {"type": "manhattan_accuracy_threshold", "value": 5586.837509625999}, {"type": "manhattan_ap", "value": 100.0}, {"type": "manhattan_f1", "value": 99.9490575649516}, {"type": "manhattan_f1_threshold", "value": 5586.837509625999}, {"type": "manhattan_precision", "value": 100.0}, {"type": "manhattan_recall", "value": 99.89816700610999}, {"type": "max_accuracy", "value": 99.89816700610999}, {"type": "max_ap", "value": 100.0}, {"type": "max_f1", "value": 99.9490575649516}, {"type": "max_precision", "value": 100.0}, {"type": "max_recall", "value": 99.89816700610999}, {"type": "similarity_accuracy", "value": 99.89816700610999}, {"type": "similarity_accuracy_threshold", "value": 40.08682069986206}, {"type": "similarity_ap", "value": 100.0}, {"type": "similarity_f1", "value": 99.9490575649516}, {"type": "similarity_f1_threshold", "value": 40.08682069986206}, {"type": "similarity_precision", "value": 100.0}, {"type": "similarity_recall", "value": 99.89816700610999}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB OpusparcusPC (fi)", "type": "GEM/opusparcus", "config": "fi", "split": "test", "revision": "9e9b1f8ef51616073f47f306f7f47dd91663f86a"}, "metrics": [{"type": "cosine_accuracy", "value": 99.89561586638831}, {"type": "cosine_accuracy_threshold", "value": -22.557142663724193}, {"type": "cosine_ap", "value": 99.99999999999999}, {"type": "cosine_f1", "value": 99.94778067885117}, {"type": "cosine_f1_threshold", "value": -22.557142663724193}, {"type": "cosine_precision", "value": 100.0}, {"type": "cosine_recall", "value": 99.89561586638831}, {"type": "dot_accuracy", "value": 99.89561586638831}, {"type": "dot_accuracy_threshold", "value": -22.55714265463469}, {"type": "dot_ap", "value": 99.99999999999999}, {"type": "dot_f1", "value": 99.94778067885117}, {"type": "dot_f1_threshold", "value": -22.55714265463469}, {"type": "dot_precision", "value": 100.0}, {"type": "dot_recall", "value": 99.89561586638831}, {"type": "euclidean_accuracy", "value": 99.89561586638831}, {"type": "euclidean_accuracy_threshold", "value": 156.13722151560276}, {"type": "euclidean_ap", "value": 99.99999999999999}, {"type": "euclidean_f1", "value": 99.94778067885117}, {"type": "euclidean_f1_threshold", "value": 156.13722151560276}, {"type": "euclidean_precision", "value": 100.0}, {"type": "euclidean_recall", "value": 99.89561586638831}, {"type": "main_score", "value": 99.99999999999999}, {"type": "manhattan_accuracy", "value": 99.89561586638831}, {"type": "manhattan_accuracy_threshold", "value": 8123.721240822417}, {"type": "manhattan_ap", "value": 99.99999999999999}, {"type": "manhattan_f1", "value": 99.94778067885117}, {"type": "manhattan_f1_threshold", "value": 8123.721240822417}, {"type": "manhattan_precision", "value": 100.0}, {"type": "manhattan_recall", "value": 99.89561586638831}, {"type": "max_accuracy", "value": 99.89561586638831}, {"type": "max_ap", "value": 99.99999999999999}, {"type": "max_f1", "value": 99.94778067885117}, {"type": "max_precision", "value": 100.0}, {"type": "max_recall", "value": 99.89561586638831}, {"type": "similarity_accuracy", "value": 99.89561586638831}, {"type": "similarity_accuracy_threshold", "value": -22.557142663724193}, {"type": "similarity_ap", "value": 99.99999999999999}, {"type": "similarity_f1", "value": 99.94778067885117}, {"type": "similarity_f1_threshold", "value": -22.557142663724193}, {"type": 
"similarity_precision", "value": 100.0}, {"type": "similarity_recall", "value": 99.89561586638831}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB OpusparcusPC (fr)", "type": "GEM/opusparcus", "config": "fr", "split": "test", "revision": "9e9b1f8ef51616073f47f306f7f47dd91663f86a"}, "metrics": [{"type": "cosine_accuracy", "value": 99.90069513406156}, {"type": "cosine_accuracy_threshold", "value": 4.276752354307001}, {"type": "cosine_ap", "value": 100.0}, {"type": "cosine_f1", "value": 99.95032290114257}, {"type": "cosine_f1_threshold", "value": 4.276752354307001}, {"type": "cosine_precision", "value": 100.0}, {"type": "cosine_recall", "value": 99.90069513406156}, {"type": "dot_accuracy", "value": 99.90069513406156}, {"type": "dot_accuracy_threshold", "value": 4.276752351391649}, {"type": "dot_ap", "value": 100.0}, {"type": "dot_f1", "value": 99.95032290114257}, {"type": "dot_f1_threshold", "value": 4.276752351391649}, {"type": "dot_precision", "value": 100.0}, {"type": "dot_recall", "value": 99.90069513406156}, {"type": "euclidean_accuracy", "value": 99.90069513406156}, {"type": "euclidean_accuracy_threshold", "value": 136.9020176878726}, {"type": "euclidean_ap", "value": 100.0}, {"type": "euclidean_f1", "value": 99.95032290114257}, {"type": "euclidean_f1_threshold", "value": 136.9020176878726}, {"type": "euclidean_precision", "value": 100.0}, {"type": "euclidean_recall", "value": 99.90069513406156}, {"type": "main_score", "value": 100.0}, {"type": "manhattan_accuracy", "value": 99.90069513406156}, {"type": "manhattan_accuracy_threshold", "value": 7063.200709566871}, {"type": "manhattan_ap", "value": 100.0}, {"type": "manhattan_f1", "value": 99.95032290114257}, {"type": "manhattan_f1_threshold", "value": 7063.200709566871}, {"type": "manhattan_precision", "value": 100.0}, {"type": "manhattan_recall", "value": 99.90069513406156}, {"type": "max_accuracy", "value": 99.90069513406156}, {"type": "max_ap", "value": 100.0}, {"type": "max_f1", "value": 99.95032290114257}, {"type": "max_precision", "value": 100.0}, {"type": "max_recall", "value": 99.90069513406156}, {"type": "similarity_accuracy", "value": 99.90069513406156}, {"type": "similarity_accuracy_threshold", "value": 4.276752354307001}, {"type": "similarity_ap", "value": 100.0}, {"type": "similarity_f1", "value": 99.95032290114257}, {"type": "similarity_f1_threshold", "value": 4.276752354307001}, {"type": "similarity_precision", "value": 100.0}, {"type": "similarity_recall", "value": 99.90069513406156}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB OpusparcusPC (ru)", "type": "GEM/opusparcus", "config": "ru", "split": "test", "revision": "9e9b1f8ef51616073f47f306f7f47dd91663f86a"}, "metrics": [{"type": "cosine_accuracy", "value": 99.90636704119851}, {"type": "cosine_accuracy_threshold", "value": 7.132103928293631}, {"type": "cosine_ap", "value": 100.0}, {"type": "cosine_f1", "value": 99.95316159250585}, {"type": "cosine_f1_threshold", "value": 7.132103928293631}, {"type": "cosine_precision", "value": 100.0}, {"type": "cosine_recall", "value": 99.90636704119851}, {"type": "dot_accuracy", "value": 99.90636704119851}, {"type": "dot_accuracy_threshold", "value": -13.447421954803113}, {"type": "dot_ap", "value": 100.0}, {"type": "dot_f1", "value": 99.95316159250585}, {"type": "dot_f1_threshold", "value": -13.447421954803113}, {"type": "dot_precision", "value": 100.0}, {"type": "dot_recall", "value": 99.90636704119851}, {"type": "euclidean_accuracy", "value": 99.90636704119851}, {"type": 
"euclidean_accuracy_threshold", "value": 133.89453353967028}, {"type": "euclidean_ap", "value": 100.0}, {"type": "euclidean_f1", "value": 99.95316159250585}, {"type": "euclidean_f1_threshold", "value": 133.89453353967028}, {"type": "euclidean_precision", "value": 100.0}, {"type": "euclidean_recall", "value": 99.90636704119851}, {"type": "main_score", "value": 100.0}, {"type": "manhattan_accuracy", "value": 99.90636704119851}, {"type": "manhattan_accuracy_threshold", "value": 7020.097656622158}, {"type": "manhattan_ap", "value": 100.0}, {"type": "manhattan_f1", "value": 99.95316159250585}, {"type": "manhattan_f1_threshold", "value": 7020.097656622158}, {"type": "manhattan_precision", "value": 100.0}, {"type": "manhattan_recall", "value": 99.90636704119851}, {"type": "max_accuracy", "value": 99.90636704119851}, {"type": "max_ap", "value": 100.0}, {"type": "max_f1", "value": 99.95316159250585}, {"type": "max_precision", "value": 100.0}, {"type": "max_recall", "value": 99.90636704119851}, {"type": "similarity_accuracy", "value": 99.90636704119851}, {"type": "similarity_accuracy_threshold", "value": 7.132103928293631}, {"type": "similarity_ap", "value": 100.0}, {"type": "similarity_f1", "value": 99.95316159250585}, {"type": "similarity_f1_threshold", "value": 7.132103928293631}, {"type": "similarity_precision", "value": 100.0}, {"type": "similarity_recall", "value": 99.90636704119851}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB OpusparcusPC (sv)", "type": "GEM/opusparcus", "config": "sv", "split": "test", "revision": "9e9b1f8ef51616073f47f306f7f47dd91663f86a"}, "metrics": [{"type": "cosine_accuracy", "value": 99.89440337909187}, {"type": "cosine_accuracy_threshold", "value": 0.2529676444121498}, {"type": "cosine_ap", "value": 100.0}, {"type": "cosine_f1", "value": 99.9471737982039}, {"type": "cosine_f1_threshold", "value": 0.2529676444121498}, {"type": "cosine_precision", "value": 100.0}, {"type": "cosine_recall", "value": 99.89440337909187}, {"type": "dot_accuracy", "value": 99.89440337909187}, {"type": "dot_accuracy_threshold", "value": -13.939213532311562}, {"type": "dot_ap", "value": 99.99999999999999}, {"type": "dot_f1", "value": 99.9471737982039}, {"type": "dot_f1_threshold", "value": -13.939213532311562}, {"type": "dot_precision", "value": 100.0}, {"type": "dot_recall", "value": 99.89440337909187}, {"type": "euclidean_accuracy", "value": 99.89440337909187}, {"type": "euclidean_accuracy_threshold", "value": 139.80163412046423}, {"type": "euclidean_ap", "value": 100.0}, {"type": "euclidean_f1", "value": 99.9471737982039}, {"type": "euclidean_f1_threshold", "value": 139.80163412046423}, {"type": "euclidean_precision", "value": 100.0}, {"type": "euclidean_recall", "value": 99.89440337909187}, {"type": "main_score", "value": 100.0}, {"type": "manhattan_accuracy", "value": 99.89440337909187}, {"type": "manhattan_accuracy_threshold", "value": 7259.639697084279}, {"type": "manhattan_ap", "value": 100.0}, {"type": "manhattan_f1", "value": 99.9471737982039}, {"type": "manhattan_f1_threshold", "value": 7259.639697084279}, {"type": "manhattan_precision", "value": 100.0}, {"type": "manhattan_recall", "value": 99.89440337909187}, {"type": "max_accuracy", "value": 99.89440337909187}, {"type": "max_ap", "value": 100.0}, {"type": "max_f1", "value": 99.9471737982039}, {"type": "max_precision", "value": 100.0}, {"type": "max_recall", "value": 99.89440337909187}, {"type": "similarity_accuracy", "value": 99.89440337909187}, {"type": "similarity_accuracy_threshold", "value": 
0.2529676444121498}, {"type": "similarity_ap", "value": 100.0}, {"type": "similarity_f1", "value": 99.9471737982039}, {"type": "similarity_f1_threshold", "value": 0.2529676444121498}, {"type": "similarity_precision", "value": 100.0}, {"type": "similarity_recall", "value": 99.89440337909187}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval (default)", "type": "mteb/quora", "config": "default", "split": "test", "revision": "e4e08e0b7dbe3c8700f0daef558ff32256715259"}, "metrics": [{"type": "main_score", "value": 68.73}, {"type": "map_at_1", "value": 53.492}, {"type": "map_at_10", "value": 64.086}, {"type": "map_at_100", "value": 64.832}, {"type": "map_at_1000", "value": 64.88199999999999}, {"type": "map_at_20", "value": 64.537}, {"type": "map_at_3", "value": 61.592}, {"type": "map_at_5", "value": 63.113}, {"type": "mrr_at_1", "value": 61.56}, {"type": "mrr_at_10", "value": 68.92823412698384}, {"type": "mrr_at_100", "value": 69.28307943909826}, {"type": "mrr_at_1000", "value": 69.30426854775237}, {"type": "mrr_at_20", "value": 69.15371761666225}, {"type": "mrr_at_3", "value": 67.3866666666664}, {"type": "mrr_at_5", "value": 68.36666666666618}, {"type": "nauc_map_at_1000_diff1", "value": 67.15642759814821}, {"type": "nauc_map_at_1000_max", "value": 45.055780376792974}, {"type": "nauc_map_at_1000_std", "value": -9.604334727421541}, {"type": "nauc_map_at_100_diff1", "value": 67.15173583169253}, {"type": "nauc_map_at_100_max", "value": 45.04159938681548}, {"type": "nauc_map_at_100_std", "value": -9.621105481487115}, {"type": "nauc_map_at_10_diff1", "value": 67.21904799567723}, {"type": "nauc_map_at_10_max", "value": 44.64598524589752}, {"type": "nauc_map_at_10_std", "value": -10.240236577363671}, {"type": "nauc_map_at_1_diff1", "value": 69.75325378909568}, {"type": "nauc_map_at_1_max", "value": 39.57437605382559}, {"type": "nauc_map_at_1_std", "value": -13.560013524667186}, {"type": "nauc_map_at_20_diff1", "value": 67.18218534766027}, {"type": "nauc_map_at_20_max", "value": 44.898145457359036}, {"type": "nauc_map_at_20_std", "value": -9.853291926035132}, {"type": "nauc_map_at_3_diff1", "value": 67.33579825697572}, {"type": "nauc_map_at_3_max", "value": 43.434634746776254}, {"type": "nauc_map_at_3_std", "value": -11.533963319404025}, {"type": "nauc_map_at_5_diff1", "value": 67.29212861119778}, {"type": "nauc_map_at_5_max", "value": 44.149577446190584}, {"type": "nauc_map_at_5_std", "value": -10.846590188540638}, {"type": "nauc_mrr_at_1000_diff1", "value": 68.43853101345768}, {"type": "nauc_mrr_at_1000_max", "value": 48.23642231569019}, {"type": "nauc_mrr_at_1000_std", "value": -8.164139622888774}, {"type": "nauc_mrr_at_100_diff1", "value": 68.43230932580869}, {"type": "nauc_mrr_at_100_max", "value": 48.2366506280321}, {"type": "nauc_mrr_at_100_std", "value": -8.15719155689163}, {"type": "nauc_mrr_at_10_diff1", "value": 68.40804119736147}, {"type": "nauc_mrr_at_10_max", "value": 48.2668711810203}, {"type": "nauc_mrr_at_10_std", "value": -8.28336977621905}, {"type": "nauc_mrr_at_1_diff1", "value": 70.8152113865952}, {"type": "nauc_mrr_at_1_max", "value": 47.0802377233158}, {"type": "nauc_mrr_at_1_std", "value": -11.195273246909617}, {"type": "nauc_mrr_at_20_diff1", "value": 68.42041452964153}, {"type": "nauc_mrr_at_20_max", "value": 48.22983590171867}, {"type": "nauc_mrr_at_20_std", "value": -8.20351261044932}, {"type": "nauc_mrr_at_3_diff1", "value": 68.44729044448252}, {"type": "nauc_mrr_at_3_max", "value": 48.16311095038692}, {"type": "nauc_mrr_at_3_std", "value": 
-8.78728757717942}, {"type": "nauc_mrr_at_5_diff1", "value": 68.38338463498374}, {"type": "nauc_mrr_at_5_max", "value": 48.268101599089846}, {"type": "nauc_mrr_at_5_std", "value": -8.477703392514476}, {"type": "nauc_ndcg_at_1000_diff1", "value": 66.78555692495787}, {"type": "nauc_ndcg_at_1000_max", "value": 46.769939711081044}, {"type": "nauc_ndcg_at_1000_std", "value": -6.218846919120327}, {"type": "nauc_ndcg_at_100_diff1", "value": 66.59364370802282}, {"type": "nauc_ndcg_at_100_max", "value": 46.67887263322755}, {"type": "nauc_ndcg_at_100_std", "value": -6.293812979200834}, {"type": "nauc_ndcg_at_10_diff1", "value": 66.52295231581002}, {"type": "nauc_ndcg_at_10_max", "value": 46.11104447757736}, {"type": "nauc_ndcg_at_10_std", "value": -8.188391638090097}, {"type": "nauc_ndcg_at_1_diff1", "value": 70.71581893884627}, {"type": "nauc_ndcg_at_1_max", "value": 47.23054126591041}, {"type": "nauc_ndcg_at_1_std", "value": -11.16636548054171}, {"type": "nauc_ndcg_at_20_diff1", "value": 66.55690608251255}, {"type": "nauc_ndcg_at_20_max", "value": 46.32176620407243}, {"type": "nauc_ndcg_at_20_std", "value": -7.290514968713207}, {"type": "nauc_ndcg_at_3_diff1", "value": 66.56467011058169}, {"type": "nauc_ndcg_at_3_max", "value": 45.85553207058}, {"type": "nauc_ndcg_at_3_std", "value": -9.625769901172513}, {"type": "nauc_ndcg_at_5_diff1", "value": 66.54844587662231}, {"type": "nauc_ndcg_at_5_max", "value": 45.907121007430526}, {"type": "nauc_ndcg_at_5_std", "value": -9.10244355196338}, {"type": "nauc_precision_at_1000_diff1", "value": -22.422463003175896}, {"type": "nauc_precision_at_1000_max", "value": 4.7758645718637895}, {"type": "nauc_precision_at_1000_std", "value": 17.79812492946632}, {"type": "nauc_precision_at_100_diff1", "value": -13.917229261278852}, {"type": "nauc_precision_at_100_max", "value": 12.29030615723118}, {"type": "nauc_precision_at_100_std", "value": 17.911028283874135}, {"type": "nauc_precision_at_10_diff1", "value": 6.590674643516733}, {"type": "nauc_precision_at_10_max", "value": 24.19926960425754}, {"type": "nauc_precision_at_10_std", "value": 10.06424163424373}, {"type": "nauc_precision_at_1_diff1", "value": 70.71581893884627}, {"type": "nauc_precision_at_1_max", "value": 47.23054126591041}, {"type": "nauc_precision_at_1_std", "value": -11.16636548054171}, {"type": "nauc_precision_at_20_diff1", "value": -2.483678970625915}, {"type": "nauc_precision_at_20_max", "value": 19.72734209605925}, {"type": "nauc_precision_at_20_std", "value": 14.191677013682849}, {"type": "nauc_precision_at_3_diff1", "value": 29.73727057888939}, {"type": "nauc_precision_at_3_max", "value": 34.568730451871346}, {"type": "nauc_precision_at_3_std", "value": 1.4403998107739213}, {"type": "nauc_precision_at_5_diff1", "value": 18.2542788731059}, {"type": "nauc_precision_at_5_max", "value": 29.292888170520108}, {"type": "nauc_precision_at_5_std", "value": 5.510094141692317}, {"type": "nauc_recall_at_1000_diff1", "value": 57.196928991569266}, {"type": "nauc_recall_at_1000_max", "value": 46.153589753933446}, {"type": "nauc_recall_at_1000_std", "value": 30.748423976943613}, {"type": "nauc_recall_at_100_diff1", "value": 57.976992158794886}, {"type": "nauc_recall_at_100_max", "value": 45.79893337773414}, {"type": "nauc_recall_at_100_std", "value": 13.253969225652396}, {"type": "nauc_recall_at_10_diff1", "value": 60.22299195797645}, {"type": "nauc_recall_at_10_max", "value": 43.85065064759132}, {"type": "nauc_recall_at_10_std", "value": -3.125491914491259}, {"type": "nauc_recall_at_1_diff1", "value": 
69.75325378909568}, {"type": "nauc_recall_at_1_max", "value": 39.57437605382559}, {"type": "nauc_recall_at_1_std", "value": -13.560013524667186}, {"type": "nauc_recall_at_20_diff1", "value": 59.1680127262332}, {"type": "nauc_recall_at_20_max", "value": 44.06962727874914}, {"type": "nauc_recall_at_20_std", "value": 1.7610688570268762}, {"type": "nauc_recall_at_3_diff1", "value": 62.75286406178069}, {"type": "nauc_recall_at_3_max", "value": 42.40300188251299}, {"type": "nauc_recall_at_3_std", "value": -8.94270893049646}, {"type": "nauc_recall_at_5_diff1", "value": 61.57224817120582}, {"type": "nauc_recall_at_5_max", "value": 43.2469875881082}, {"type": "nauc_recall_at_5_std", "value": -6.712607605292967}, {"type": "ndcg_at_1", "value": 61.61}, {"type": "ndcg_at_10", "value": 68.73}, {"type": "ndcg_at_100", "value": 71.281}, {"type": "ndcg_at_1000", "value": 72.209}, {"type": "ndcg_at_20", "value": 69.862}, {"type": "ndcg_at_3", "value": 65.35}, {"type": "ndcg_at_5", "value": 67.099}, {"type": "precision_at_1", "value": 61.61}, {"type": "precision_at_10", "value": 10.295}, {"type": "precision_at_100", "value": 1.2670000000000001}, {"type": "precision_at_1000", "value": 0.14100000000000001}, {"type": "precision_at_20", "value": 5.583}, {"type": "precision_at_3", "value": 28.157}, {"type": "precision_at_5", "value": 18.644}, {"type": "recall_at_1", "value": 53.492}, {"type": "recall_at_10", "value": 77.395}, {"type": "recall_at_100", "value": 87.822}, {"type": "recall_at_1000", "value": 94.039}, {"type": "recall_at_20", "value": 81.381}, {"type": "recall_at_3", "value": 67.657}, {"type": "recall_at_5", "value": 72.494}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering (default)", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "main_score", "value": 22.18693423438157}, {"type": "v_measure", "value": 22.18693423438157}, {"type": "v_measure_std", "value": 3.362608784471836}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R (default)", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "20a6d6f312dd54037fe07a32d58e5e168867909d"}, "metrics": [{"type": "cosine_pearson", "value": 74.25579384618342}, {"type": "cosine_spearman", "value": 67.31903429944056}, {"type": "euclidean_pearson", "value": 71.84781550612432}, {"type": "euclidean_spearman", "value": 67.31913348808827}, {"type": "main_score", "value": 67.31903429944056}, {"type": "manhattan_pearson", "value": 71.93525335001107}, {"type": "manhattan_spearman", "value": 67.44731252485444}, {"type": "pearson", "value": 74.25579384618342}, {"type": "spearman", "value": 67.31903429944056}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12 (default)", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cosine_pearson", "value": 70.45282392047417}, {"type": "cosine_spearman", "value": 57.66176503826067}, {"type": "euclidean_pearson", "value": 68.20476513300197}, {"type": "euclidean_spearman", "value": 57.662984752186595}, {"type": "main_score", "value": 57.66176503826067}, {"type": "manhattan_pearson", "value": 68.35595302570229}, {"type": "manhattan_spearman", "value": 57.78214901099006}, {"type": "pearson", "value": 70.45282392047417}, {"type": "spearman", "value": 57.66176503826067}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13 (default)", "type": "mteb/sts13-sts", 
"config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cosine_pearson", "value": 66.72224934737348}, {"type": "cosine_spearman", "value": 71.89696855506867}, {"type": "euclidean_pearson", "value": 70.4712630269631}, {"type": "euclidean_spearman", "value": 71.89698079206684}, {"type": "main_score", "value": 71.89696855506867}, {"type": "manhattan_pearson", "value": 70.45860743861545}, {"type": "manhattan_spearman", "value": 71.91608445555363}, {"type": "pearson", "value": 66.72224934737348}, {"type": "spearman", "value": 71.89696855506867}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14 (default)", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cosine_pearson", "value": 70.34249555730298}, {"type": "cosine_spearman", "value": 69.53679034910807}, {"type": "euclidean_pearson", "value": 71.56701694057745}, {"type": "euclidean_spearman", "value": 69.5367806640627}, {"type": "main_score", "value": 69.53679034910807}, {"type": "manhattan_pearson", "value": 71.53194206589868}, {"type": "manhattan_spearman", "value": 69.52240262783113}, {"type": "pearson", "value": 70.34249555730298}, {"type": "spearman", "value": 69.53679034910807}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15 (default)", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cosine_pearson", "value": 68.33547250158846}, {"type": "cosine_spearman", "value": 73.96543736110634}, {"type": "euclidean_pearson", "value": 72.63926797717605}, {"type": "euclidean_spearman", "value": 73.96543799049243}, {"type": "main_score", "value": 73.96543736110634}, {"type": "manhattan_pearson", "value": 72.6308651035737}, {"type": "manhattan_spearman", "value": 73.99784893840472}, {"type": "pearson", "value": 68.33547250158846}, {"type": "spearman", "value": 73.96543736110634}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16 (default)", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cosine_pearson", "value": 62.50064232309498}, {"type": "cosine_spearman", "value": 69.99690285087063}, {"type": "euclidean_pearson", "value": 67.7773080753282}, {"type": "euclidean_spearman", "value": 69.99717504340504}, {"type": "main_score", "value": 69.99690285087063}, {"type": "manhattan_pearson", "value": 67.77737269625732}, {"type": "manhattan_spearman", "value": 70.05662507231811}, {"type": "pearson", "value": 62.50064232309498}, {"type": "spearman", "value": 69.99690285087063}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-de)", "type": "mteb/sts17-crosslingual-sts", "config": "en-de", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": -4.639974351143124}, {"type": "cosine_spearman", "value": -5.70963417137641}, {"type": "euclidean_pearson", "value": -4.671269689471623}, {"type": "euclidean_spearman", "value": -5.70963417137641}, {"type": "main_score", "value": -5.70963417137641}, {"type": "manhattan_pearson", "value": -4.822356012695697}, {"type": "manhattan_spearman", "value": -5.805771748799997}, {"type": "pearson", "value": -4.639974351143124}, {"type": "spearman", "value": -5.70963417137641}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": 
"mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 75.07706637430398}, {"type": "cosine_spearman", "value": 78.81834383119009}, {"type": "euclidean_pearson", "value": 78.33040815719426}, {"type": "euclidean_spearman", "value": 78.81922098296683}, {"type": "main_score", "value": 78.81834383119009}, {"type": "manhattan_pearson", "value": 78.25386282376627}, {"type": "manhattan_spearman", "value": 78.73096351789457}, {"type": "pearson", "value": 75.07706637430398}, {"type": "spearman", "value": 78.81834383119009}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (it-en)", "type": "mteb/sts17-crosslingual-sts", "config": "it-en", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": -8.034513096828757}, {"type": "cosine_spearman", "value": -8.94071782108332}, {"type": "euclidean_pearson", "value": -8.362035046748408}, {"type": "euclidean_spearman", "value": -8.94071782108332}, {"type": "main_score", "value": -8.94071782108332}, {"type": "manhattan_pearson", "value": -8.58384659065939}, {"type": "manhattan_spearman", "value": -9.022478967496742}, {"type": "pearson", "value": -8.034513096828757}, {"type": "spearman", "value": -8.94071782108332}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-en)", "type": "mteb/sts17-crosslingual-sts", "config": "es-en", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": -9.309746585888194}, {"type": "cosine_spearman", "value": -9.989532291941243}, {"type": "euclidean_pearson", "value": -9.113663493693515}, {"type": "euclidean_spearman", "value": -9.989532291941243}, {"type": "main_score", "value": -9.989532291941243}, {"type": "manhattan_pearson", "value": -9.123108445100232}, {"type": "manhattan_spearman", "value": -10.02555353386953}, {"type": "pearson", "value": -9.309746585888194}, {"type": "spearman", "value": -9.989532291941243}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-es)", "type": "mteb/sts17-crosslingual-sts", "config": "es-es", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 49.203212653579534}, {"type": "cosine_spearman", "value": 62.17745071362616}, {"type": "euclidean_pearson", "value": 60.12172084869311}, {"type": "euclidean_spearman", "value": 62.17745071362616}, {"type": "main_score", "value": 62.17745071362616}, {"type": "manhattan_pearson", "value": 60.03123674358504}, {"type": "manhattan_spearman", "value": 62.08054980165127}, {"type": "pearson", "value": 49.203212653579534}, {"type": "spearman", "value": 62.17745071362616}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (fr-en)", "type": "mteb/sts17-crosslingual-sts", "config": "fr-en", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": -3.796131822561097}, {"type": "cosine_spearman", "value": -3.6829417954942962}, {"type": "euclidean_pearson", "value": -3.9617579449787215}, {"type": "euclidean_spearman", "value": -3.6829417954942962}, {"type": "main_score", "value": -3.6829417954942962}, {"type": "manhattan_pearson", "value": -4.229917664747983}, {"type": "manhattan_spearman", "value": -3.8304347521413575}, {"type": "pearson", "value": -3.796131822561097}, {"type": "spearman", "value": -3.6829417954942962}]}, {"task": {"type": 
"STS"}, "dataset": {"name": "MTEB STS17 (ko-ko)", "type": "mteb/sts17-crosslingual-sts", "config": "ko-ko", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 9.70401307418669}, {"type": "cosine_spearman", "value": 7.125994342518046}, {"type": "euclidean_pearson", "value": 8.692865519584803}, {"type": "euclidean_spearman", "value": 7.086314063560257}, {"type": "main_score", "value": 7.125994342518046}, {"type": "manhattan_pearson", "value": 8.688214277742162}, {"type": "manhattan_spearman", "value": 6.951151829297476}, {"type": "pearson", "value": 9.70401307418669}, {"type": "spearman", "value": 7.125994342518046}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-tr)", "type": "mteb/sts17-crosslingual-sts", "config": "en-tr", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": -12.59835322441286}, {"type": "cosine_spearman", "value": -17.99707926594973}, {"type": "euclidean_pearson", "value": -14.34931127125891}, {"type": "euclidean_spearman", "value": -17.99707926594973}, {"type": "main_score", "value": -17.99707926594973}, {"type": "manhattan_pearson", "value": -14.599702365227513}, {"type": "manhattan_spearman", "value": -18.256327942493844}, {"type": "pearson", "value": -12.59835322441286}, {"type": "spearman", "value": -17.99707926594973}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (nl-en)", "type": "mteb/sts17-crosslingual-sts", "config": "nl-en", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": -0.06664551245524106}, {"type": "cosine_spearman", "value": -0.891108084699552}, {"type": "euclidean_pearson", "value": 0.2657845183657392}, {"type": "euclidean_spearman", "value": -0.891108084699552}, {"type": "main_score", "value": -0.891108084699552}, {"type": "manhattan_pearson", "value": 0.120752189864216}, {"type": "manhattan_spearman", "value": -0.8531297054534491}, {"type": "pearson", "value": -0.06664551245524106}, {"type": "spearman", "value": -0.891108084699552}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ar-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "ar-ar", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": 9.587866133715462}, {"type": "cosine_spearman", "value": 10.240476793789082}, {"type": "euclidean_pearson", "value": 9.587866133709937}, {"type": "euclidean_spearman", "value": 10.299853867377841}, {"type": "main_score", "value": 10.240476793789082}, {"type": "manhattan_pearson", "value": 9.587479080379996}, {"type": "manhattan_spearman", "value": 10.289638886132417}, {"type": "pearson", "value": 9.587866133715462}, {"type": "spearman", "value": 10.240476793789082}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "en-ar", "split": "test", "revision": "faeb762787bd10488a50c8b5be4a3b82e411949c"}, "metrics": [{"type": "cosine_pearson", "value": -11.455833153778357}, {"type": "cosine_spearman", "value": -12.120168687487281}, {"type": "euclidean_pearson", "value": -4.8404233986021}, {"type": "euclidean_spearman", "value": -5.629445269503656}, {"type": "main_score", "value": -12.120168687487281}, {"type": "manhattan_pearson", "value": -5.802510530492165}, {"type": "manhattan_spearman", "value": -4.129636012427943}, {"type": "pearson", "value": -11.455833153778357}, 
{"type": "spearman", "value": -12.120168687487281}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark (default)", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cosine_pearson", "value": 67.09018720017058}, {"type": "cosine_spearman", "value": 67.6086401236391}, {"type": "euclidean_pearson", "value": 69.37492911426406}, {"type": "euclidean_spearman", "value": 67.60865860108962}, {"type": "main_score", "value": 67.6086401236391}, {"type": "manhattan_pearson", "value": 69.34659483682688}, {"type": "manhattan_spearman", "value": 67.592012200863}, {"type": "pearson", "value": 67.09018720017058}, {"type": "spearman", "value": 67.6086401236391}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (it)", "type": "mteb/stsb_multi_mt", "config": "it", "split": "test", "revision": "29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c"}, "metrics": [{"type": "cosine_pearson", "value": 44.27233827248044}, {"type": "cosine_spearman", "value": 49.47510261384346}, {"type": "euclidean_pearson", "value": 49.40398312290145}, {"type": "euclidean_spearman", "value": 49.47500131889738}, {"type": "main_score", "value": 49.47510261384346}, {"type": "manhattan_pearson", "value": 49.341548618895466}, {"type": "manhattan_spearman", "value": 49.4424887001277}, {"type": "pearson", "value": 44.27233827248044}, {"type": "spearman", "value": 49.47510261384346}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (nl)", "type": "mteb/stsb_multi_mt", "config": "nl", "split": "test", "revision": "29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c"}, "metrics": [{"type": "cosine_pearson", "value": 44.79696340221503}, {"type": "cosine_spearman", "value": 48.84897104878986}, {"type": "euclidean_pearson", "value": 49.324260285317855}, {"type": "euclidean_spearman", "value": 48.848924358139364}, {"type": "main_score", "value": 48.84897104878986}, {"type": "manhattan_pearson", "value": 49.33647165074528}, {"type": "manhattan_spearman", "value": 48.88344266774654}, {"type": "pearson", "value": 44.79696340221503}, {"type": "spearman", "value": 48.84897104878986}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (en)", "type": "mteb/stsb_multi_mt", "config": "en", "split": "test", "revision": "29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c"}, "metrics": [{"type": "cosine_pearson", "value": 67.09018713920469}, {"type": "cosine_spearman", "value": 67.6086401236391}, {"type": "euclidean_pearson", "value": 69.37492906687476}, {"type": "euclidean_spearman", "value": 67.60865860108962}, {"type": "main_score", "value": 67.6086401236391}, {"type": "manhattan_pearson", "value": 69.34659479129859}, {"type": "manhattan_spearman", "value": 67.592012200863}, {"type": "pearson", "value": 67.09018713920469}, {"type": "spearman", "value": 67.6086401236391}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (es)", "type": "mteb/stsb_multi_mt", "config": "es", "split": "test", "revision": "29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c"}, "metrics": [{"type": "cosine_pearson", "value": 42.895339590180996}, {"type": "cosine_spearman", "value": 52.21235147253785}, {"type": "euclidean_pearson", "value": 49.413874942919264}, {"type": "euclidean_spearman", "value": 52.21203780406665}, {"type": "main_score", "value": 52.21235147253785}, {"type": "manhattan_pearson", "value": 49.276873027104855}, {"type": "manhattan_spearman", "value": 
52.16409604469493}, {"type": "pearson", "value": 42.895339590180996}, {"type": "spearman", "value": 52.21235147253785}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (ru)", "type": "mteb/stsb_multi_mt", "config": "ru", "split": "test", "revision": "29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c"}, "metrics": [{"type": "cosine_pearson", "value": 10.389925450857834}, {"type": "cosine_spearman", "value": 8.908138291052701}, {"type": "euclidean_pearson", "value": 9.890367033199064}, {"type": "euclidean_spearman", "value": 8.770978113601167}, {"type": "main_score", "value": 8.908138291052701}, {"type": "manhattan_pearson", "value": 9.899760056143247}, {"type": "manhattan_spearman", "value": 9.030970134574098}, {"type": "pearson", "value": 10.389925450857834}, {"type": "spearman", "value": 8.908138291052701}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (zh)", "type": "mteb/stsb_multi_mt", "config": "zh", "split": "test", "revision": "29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c"}, "metrics": [{"type": "cosine_pearson", "value": 3.2165863331249414}, {"type": "cosine_spearman", "value": 0.7975692702633864}, {"type": "euclidean_pearson", "value": 2.0618436826186066}, {"type": "euclidean_spearman", "value": 0.5027230247162311}, {"type": "main_score", "value": 0.7975692702633864}, {"type": "manhattan_pearson", "value": 2.0514189695530325}, {"type": "manhattan_spearman", "value": 0.39577079994867403}, {"type": "pearson", "value": 3.2165863331249414}, {"type": "spearman", "value": 0.7975692702633864}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (fr)", "type": "mteb/stsb_multi_mt", "config": "fr", "split": "test", "revision": "29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c"}, "metrics": [{"type": "cosine_pearson", "value": 46.17508747479316}, {"type": "cosine_spearman", "value": 51.086872268140816}, {"type": "euclidean_pearson", "value": 51.41891364659744}, {"type": "euclidean_spearman", "value": 51.08665283035928}, {"type": "main_score", "value": 51.086872268140816}, {"type": "manhattan_pearson", "value": 51.361372778247606}, {"type": "manhattan_spearman", "value": 51.045873818882924}, {"type": "pearson", "value": 46.17508747479316}, {"type": "spearman", "value": 51.086872268140816}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (pt)", "type": "mteb/stsb_multi_mt", "config": "pt", "split": "test", "revision": "29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c"}, "metrics": [{"type": "cosine_pearson", "value": 40.639680830613514}, {"type": "cosine_spearman", "value": 47.99664145034049}, {"type": "euclidean_pearson", "value": 46.61505913234052}, {"type": "euclidean_spearman", "value": 47.99654723025848}, {"type": "main_score", "value": 47.99664145034049}, {"type": "manhattan_pearson", "value": 46.594310151466146}, {"type": "manhattan_spearman", "value": 47.96444879548329}, {"type": "pearson", "value": 40.639680830613514}, {"type": "spearman", "value": 47.99664145034049}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (pl)", "type": "mteb/stsb_multi_mt", "config": "pl", "split": "test", "revision": "29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c"}, "metrics": [{"type": "cosine_pearson", "value": 46.72373117676612}, {"type": "cosine_spearman", "value": 52.865236864827345}, {"type": "euclidean_pearson", "value": 52.45181901546032}, {"type": "euclidean_spearman", "value": 52.86458795625298}, {"type": "main_score", "value": 52.865236864827345}, {"type": 
"manhattan_pearson", "value": 52.44185889658423}, {"type": "manhattan_spearman", "value": 52.78491169411964}, {"type": "pearson", "value": 46.72373117676612}, {"type": "spearman", "value": 52.865236864827345}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (de)", "type": "mteb/stsb_multi_mt", "config": "de", "split": "test", "revision": "29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c"}, "metrics": [{"type": "cosine_pearson", "value": 48.138397241162444}, {"type": "cosine_spearman", "value": 51.285304430536335}, {"type": "euclidean_pearson", "value": 51.803064906612896}, {"type": "euclidean_spearman", "value": 51.28542208854524}, {"type": "main_score", "value": 51.285304430536335}, {"type": "manhattan_pearson", "value": 51.819864335986956}, {"type": "manhattan_spearman", "value": 51.32840976987932}, {"type": "pearson", "value": 48.138397241162444}, {"type": "spearman", "value": 51.285304430536335}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR (default)", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "main_score", "value": 60.74844680566163}, {"type": "map", "value": 60.74844680566163}, {"type": "mrr", "value": 84.68450485607349}, {"type": "nAUC_map_diff1", "value": 13.078055417971749}, {"type": "nAUC_map_max", "value": 47.937301739074215}, {"type": "nAUC_map_std", "value": 34.26921463872339}, {"type": "nAUC_mrr_diff1", "value": 42.90446482292105}, {"type": "nAUC_mrr_max", "value": 59.75684998106037}, {"type": "nAUC_mrr_std", "value": 30.107306162191268}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions (default)", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cosine_accuracy", "value": 99.44851485148514}, {"type": "cosine_accuracy_threshold", "value": 95.47240059357654}, {"type": "cosine_ap", "value": 68.22522420879186}, {"type": "cosine_f1", "value": 65.92635885447106}, {"type": "cosine_f1_threshold", "value": 94.98664208777299}, {"type": "cosine_precision", "value": 79.32489451476793}, {"type": "cosine_recall", "value": 56.39999999999999}, {"type": "dot_accuracy", "value": 99.44851485148514}, {"type": "dot_accuracy_threshold", "value": 95.47240056095825}, {"type": "dot_ap", "value": 68.22522420879186}, {"type": "dot_f1", "value": 65.92635885447106}, {"type": "dot_f1_threshold", "value": 94.98664205438727}, {"type": "dot_precision", "value": 79.32489451476793}, {"type": "dot_recall", "value": 56.39999999999999}, {"type": "euclidean_accuracy", "value": 99.44851485148514}, {"type": "euclidean_accuracy_threshold", "value": 30.091857225199625}, {"type": "euclidean_ap", "value": 68.22522420879186}, {"type": "euclidean_f1", "value": 65.92635885447106}, {"type": "euclidean_f1_threshold", "value": 31.664989847761138}, {"type": "euclidean_precision", "value": 79.32489451476793}, {"type": "euclidean_recall", "value": 56.39999999999999}, {"type": "main_score", "value": 68.28159512609737}, {"type": "manhattan_accuracy", "value": 99.44851485148514}, {"type": "manhattan_accuracy_threshold", "value": 1519.5971755477553}, {"type": "manhattan_ap", "value": 68.28159512609737}, {"type": "manhattan_f1", "value": 66.05818596691385}, {"type": "manhattan_f1_threshold", "value": 1628.6210010065347}, {"type": "manhattan_precision", "value": 76.89243027888446}, {"type": 
"manhattan_recall", "value": 57.9}, {"type": "max_accuracy", "value": 99.44851485148514}, {"type": "max_ap", "value": 68.28159512609737}, {"type": "max_f1", "value": 66.05818596691385}, {"type": "max_precision", "value": 79.32489451476793}, {"type": "max_recall", "value": 57.9}, {"type": "similarity_accuracy", "value": 99.44851485148514}, {"type": "similarity_accuracy_threshold", "value": 95.47240059357654}, {"type": "similarity_ap", "value": 68.22522420879186}, {"type": "similarity_f1", "value": 65.92635885447106}, {"type": "similarity_f1_threshold", "value": 94.98664208777299}, {"type": "similarity_precision", "value": 79.32489451476793}, {"type": "similarity_recall", "value": 56.39999999999999}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering (default)", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "main_score", "value": 29.30513928170411}, {"type": "v_measure", "value": 29.30513928170411}, {"type": "v_measure_std", "value": 4.167908098359504}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions (default)", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "main_score", "value": 41.60577705014483}, {"type": "map", "value": 41.60577705014483}, {"type": "mrr", "value": 42.046595153212806}, {"type": "nAUC_map_diff1", "value": 29.435613304703427}, {"type": "nAUC_map_max", "value": 23.041089610073772}, {"type": "nAUC_map_std", "value": 4.187983544965867}, {"type": "nAUC_mrr_diff1", "value": 28.24912241668722}, {"type": "nAUC_mrr_max", "value": 23.844594928925574}, {"type": "nAUC_mrr_std", "value": 5.300127051350153}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification (default)", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 61.03515625}, {"type": "ap", "value": 10.357109818250033}, {"type": "ap_weighted", "value": 10.357109818250033}, {"type": "f1", "value": 46.79659702416427}, {"type": "f1_weighted", "value": 69.34093343990779}, {"type": "main_score", "value": 61.03515625}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification (default)", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 54.88964346349745}, {"type": "f1", "value": 54.88849570146398}, {"type": "f1_weighted", "value": 54.0202173220827}, {"type": "main_score", "value": 54.88964346349745}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering (default)", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "main_score", "value": 25.77793337013197}, {"type": "v_measure", "value": 25.77793337013197}, {"type": "v_measure_std", "value": 1.7036625620777253}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015 (default)", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cosine_accuracy", 
"value": 83.50718245216666}, {"type": "cosine_accuracy_threshold", "value": 92.85797990005872}, {"type": "cosine_ap", "value": 64.57501485077721}, {"type": "cosine_f1", "value": 61.107669433775236}, {"type": "cosine_f1_threshold", "value": 90.91770372653797}, {"type": "cosine_precision", "value": 57.60336370007008}, {"type": "cosine_recall", "value": 65.06596306068602}, {"type": "dot_accuracy", "value": 83.50718245216666}, {"type": "dot_accuracy_threshold", "value": 92.85797986316105}, {"type": "dot_ap", "value": 64.57501485077721}, {"type": "dot_f1", "value": 61.107669433775236}, {"type": "dot_f1_threshold", "value": 90.91770369108825}, {"type": "dot_precision", "value": 57.60336370007008}, {"type": "dot_recall", "value": 65.06596306068602}, {"type": "euclidean_accuracy", "value": 83.50718245216666}, {"type": "euclidean_accuracy_threshold", "value": 37.794231852628414}, {"type": "euclidean_ap", "value": 64.57501485077721}, {"type": "euclidean_f1", "value": 61.107669433775236}, {"type": "euclidean_f1_threshold", "value": 42.61993960299444}, {"type": "euclidean_precision", "value": 57.60336370007008}, {"type": "euclidean_recall", "value": 65.06596306068602}, {"type": "main_score", "value": 64.57501485077721}, {"type": "manhattan_accuracy", "value": 83.48930082851524}, {"type": "manhattan_accuracy_threshold", "value": 1897.2244120282544}, {"type": "manhattan_ap", "value": 64.55099351854031}, {"type": "manhattan_f1", "value": 61.062609129458714}, {"type": "manhattan_f1_threshold", "value": 2160.535839208718}, {"type": "manhattan_precision", "value": 57.89971617786187}, {"type": "manhattan_recall", "value": 64.5910290237467}, {"type": "max_accuracy", "value": 83.50718245216666}, {"type": "max_ap", "value": 64.57501485077721}, {"type": "max_f1", "value": 61.107669433775236}, {"type": "max_precision", "value": 57.89971617786187}, {"type": "max_recall", "value": 65.06596306068602}, {"type": "similarity_accuracy", "value": 83.50718245216666}, {"type": "similarity_accuracy_threshold", "value": 92.85797990005872}, {"type": "similarity_ap", "value": 64.57501485077721}, {"type": "similarity_f1", "value": 61.107669433775236}, {"type": "similarity_f1_threshold", "value": 90.91770372653797}, {"type": "similarity_precision", "value": 57.60336370007008}, {"type": "similarity_recall", "value": 65.06596306068602}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus (default)", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cosine_accuracy", "value": 86.35463965537315}, {"type": "cosine_accuracy_threshold", "value": 93.93182168113243}, {"type": "cosine_ap", "value": 79.17988590079685}, {"type": "cosine_f1", "value": 71.77413258749716}, {"type": "cosine_f1_threshold", "value": 92.7978491290961}, {"type": "cosine_precision", "value": 70.48997772828508}, {"type": "cosine_recall", "value": 73.10594394825993}, {"type": "dot_accuracy", "value": 86.35463965537315}, {"type": "dot_accuracy_threshold", "value": 93.9318216501234}, {"type": "dot_ap", "value": 79.17988590079685}, {"type": "dot_f1", "value": 71.77413258749716}, {"type": "dot_f1_threshold", "value": 92.79784909821515}, {"type": "dot_precision", "value": 70.48997772828508}, {"type": "dot_recall", "value": 73.10594394825993}, {"type": "euclidean_accuracy", "value": 86.35463965537315}, {"type": "euclidean_accuracy_threshold", "value": 34.837274051981524}, {"type": "euclidean_ap", "value": 
79.17988575609482}, {"type": "euclidean_f1", "value": 71.77413258749716}, {"type": "euclidean_f1_threshold", "value": 37.95299953339363}, {"type": "euclidean_precision", "value": 70.48997772828508}, {"type": "euclidean_recall", "value": 73.10594394825993}, {"type": "main_score", "value": 79.17988590079685}, {"type": "manhattan_accuracy", "value": 86.36046105483757}, {"type": "manhattan_accuracy_threshold", "value": 1771.5702122947137}, {"type": "manhattan_ap", "value": 79.16559289648251}, {"type": "manhattan_f1", "value": 71.8502354427472}, {"type": "manhattan_f1_threshold", "value": 1912.7281549009595}, {"type": "manhattan_precision", "value": 71.45359019264448}, {"type": "manhattan_recall", "value": 72.25130890052355}, {"type": "max_accuracy", "value": 86.36046105483757}, {"type": "max_ap", "value": 79.17988590079685}, {"type": "max_f1", "value": 71.8502354427472}, {"type": "max_precision", "value": 71.45359019264448}, {"type": "max_recall", "value": 73.10594394825993}, {"type": "similarity_accuracy", "value": 86.35463965537315}, {"type": "similarity_accuracy_threshold", "value": 93.93182168113243}, {"type": "similarity_ap", "value": 79.17988590079685}, {"type": "similarity_f1", "value": 71.77413258749716}, {"type": "similarity_f1_threshold", "value": 92.7978491290961}, {"type": "similarity_precision", "value": 70.48997772828508}, {"type": "similarity_recall", "value": 73.10594394825993}]}]}]}
dataset
null
528
EleutherAI/pythia-6.9b-deduped
EleutherAI
text-generation
[ "transformers", "pytorch", "gpt_neox", "text-generation", "causal-lm", "pythia", "en", "dataset:EleutherAI/the_pile_deduplicated", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-02-25T17:56:57Z
2023-06-08T13:05:19+00:00
10,266
8
---
datasets:
- EleutherAI/the_pile_deduplicated
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
---

The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches.

The Pythia model suite was designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites.

<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>

Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**

Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts.
</details>
<br>

# Pythia-6.9B-deduped

## Model Details

- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).

<figure>

| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |

<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption>
</figure>

## Uses and Limitations

### Intended Use

The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model.

You may also further fine-tune and adapt Pythia-6.9B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-6.9B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
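As a concrete illustration of the checkpoint naming scheme described above, the 154 branch names can be enumerated programmatically. This is a sketch based only on the scheme quoted in this card (`step0`, ten log-spaced steps up to 512, then every 1000 steps up to 143000); it does not query the Hub.

```python
# Enumerate the 154 checkpoint branch names described above (illustrative sketch).
log_spaced = [2 ** i for i in range(10)]                # 1, 2, 4, ..., 512
evenly_spaced = list(range(1000, 143_000 + 1, 1000))    # 1000, 2000, ..., 143000

branches = ["step0"] + [f"step{s}" for s in log_spaced + evenly_spaced]

assert len(branches) == 154   # 1 initial + 10 log-spaced + 143 evenly-spaced
print(branches[:5], "...", branches[-1])
```

Each of these names can be passed as `revision=` to `from_pretrained`, as the Quickstart below demonstrates for `step3000`.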
### Out-of-scope use

The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case.

Pythia models are English-language only, and are not suitable for translation or generating text in other languages.

Pythia-6.9B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-6.9B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions.

### Limitations and biases

The core functionality of a large language model is to take a string of text and predict the next token. The token predicted by the model need not produce the most “accurate” text. Never rely on Pythia-6.9B-deduped to produce factually accurate output.

This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-6.9B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive.

If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-6.9B-deduped.

### Quickstart

Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint:

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```

Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia).

## Training

### Training data

Pythia-6.9B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/).

### Training procedure

All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.

All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
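The token counts quoted above are internally consistent; a quick arithmetic check (plain Python, using only the figures stated in this section) recovers both the per-checkpoint spacing and the total training budget:

```python
# Sanity-check the checkpoint arithmetic stated above.
tokens_per_step = 2_097_152          # 2M-token batch per optimizer step
total_steps = 143_000                # final checkpoint, identical to the `main` branch

tokens_between_checkpoints = tokens_per_step * 1_000   # checkpoints every 1000 steps
total_tokens = tokens_per_step * total_steps

print(f"{tokens_between_checkpoints:,}")  # 2,097,152,000
print(f"{total_tokens:,}")                # 299,892,736,000
```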
## Evaluations

All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM.

<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>

<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>

<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>

<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>

<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>

## Changelog

This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance.

- All model sizes are now trained with a uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with the LR decaying to a minimum of 0.1× their maximum LR.

### Naming convention and parameter count

*Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count.

<figure style="width:32em">

| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
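For completeness, the Quickstart pattern above presumably carries over unchanged to this 6.9B model; a minimal sketch loading the `main` branch (an assumption only in that it omits `revision=` and requires enough memory for a 6.9B-parameter checkpoint):

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the final (`main`) checkpoint of Pythia-6.9B-deduped; pass revision="step<N>"
# to load one of the 154 intermediate checkpoints instead.
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-6.9b-deduped")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-6.9b-deduped")

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```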
[ "SCIQ" ]
Non_BioNLP
The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-6.9B-deduped ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). 
<figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-6.9B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-6.9B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-6.9B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-6.9B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token used by the model need not produce the most “accurate” text. Never rely on Pythia-6.9B-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. 
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-6.9B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-6.9B-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data Pythia-6.9B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). ### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). 
You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with a uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models were now trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
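The `step{N}` branches described above make it easy to study how model behavior evolves over training. Below is a minimal sketch that compares the language-modeling loss of an early and a final checkpoint on an arbitrary prompt; it is shown with the smallest suite member for speed, and the prompt, revisions, and model size are illustrative assumptions rather than a prescribed workflow (the same branches exist for Pythia-6.9B-deduped, at a much larger memory cost).

```python
import torch
from transformers import AutoTokenizer, GPTNeoXForCausalLM

# Shown with the smallest suite member for speed; the same step{N} branches
# exist for "EleutherAI/pythia-6.9b-deduped".
MODEL = "EleutherAI/pythia-70m-deduped"
REVISIONS = ["step1000", "step143000"]  # an early checkpoint vs. the final one (same as `main`)

tokenizer = AutoTokenizer.from_pretrained(MODEL)
inputs = tokenizer("The capital of France is", return_tensors="pt")

for revision in REVISIONS:
    model = GPTNeoXForCausalLM.from_pretrained(MODEL, revision=revision)
    model.eval()
    with torch.no_grad():
        # Language-modeling loss of the prompt under this checkpoint;
        # it should generally be lower for the later checkpoint.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    print(f"{revision}: loss = {loss.item():.3f}")
```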
{"datasets": ["EleutherAI/the_pile_deduplicated"], "language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm", "pythia"]}
dataset
null
529
abazoge/DrBERT-4096
abazoge
fill-mask
[ "transformers", "pytorch", "longformer", "fill-mask", "biomedical", "medical", "clinical", "life science", "fr", "dataset:Dr-BERT/NACHOS", "arxiv:2402.16689", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-01-17T13:43:34Z
2024-04-15T09:12:53+00:00
23
0
--- datasets: - Dr-BERT/NACHOS language: - fr library_name: transformers license: apache-2.0 tags: - biomedical - medical - clinical - life science --- # DrLongformer <span style="font-size:larger;">**DrLongformer**</span> is a French pretrained Longformer model based on Clinical-Longformer that was further pretrained on the NACHOS dataset (same dataset as [DrBERT](https://github.com/qanastek/DrBERT)). This model allows up to 4,096 tokens as input. DrLongformer consistently outperforms medical BERT-based models across most downstream tasks regardless of sequence length, except on NER tasks. Evaluated downstream tasks cover named entity recognition (NER), question answering (MCQA), Semantic textual similarity (STS) and text classification tasks (CLS). For more details, please refer to our paper: [Adaptation of Biomedical and Clinical Pretrained Models to French Long Documents: A Comparative Study](). ### Model pretraining We explored multiple strategies for the adaptation of Longformer models to the French medical domain: - Further pretraining of English clinical Longformer on French medical data. - Converting a French medical BERT model to the Longformer architecture. - Pretraining a Longformer from scratch on French medical data. All Pretraining scripts to reproduce the experiments are available in this Github repository: [DrLongformer](https://github.com/abazoge/DrLongformer). For the `from scratch` and `further pretraining` strategies, the training scripts are the same as [DrBERT](https://github.com/qanastek/DrBERT), only the bash scripts are different and available in this repository. All models were trained on the [Jean Zay](http://www.idris.fr/jean-zay/) French supercomputer. | Model name | Corpus | Pretraining strategy | Sequence Length | Model URL | | :------: | :---: | :---: | :---: | :---: | | `DrLongformer` | NACHOS 7 GB | Further pretraining of [Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) | 4096 | [HuggingFace](https://huggingface.co/abazoge/DrLongformer) | | `DrBERT-4096` | NACHOS 7 GB | Conversion of [DrBERT-7B](https://huggingface.co/Dr-BERT/DrBERT-7GB) to the Longformer architecture | 4096 | [HuggingFace](https://huggingface.co/abazoge/DrBERT-4096) | | `DrLongformer-FS (from scratch)` | NACHOS 7 GB | Pretraining from scratch | 4096 | Not available | ### Model Usage You can use DrLongformer directly from [Hugging Face's Transformers](https://github.com/huggingface/transformers): ```python # !pip install transformers from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("abazoge/DrLongformer") model = AutoModelForMaskedLM.from_pretrained("abazoge/DrLongformer") ``` ### Citation ``` @misc{bazoge2024adaptation, title={Adaptation of Biomedical and Clinical Pretrained Models to French Long Documents: A Comparative Study}, author={Adrien Bazoge and Emmanuel Morin and Beatrice Daille and Pierre-Antoine Gourraud}, year={2024}, eprint={2402.16689}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
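As a complement to the loading snippet above, here is a minimal fill-mask sketch. The French example sentence and the `top_k` value are illustrative assumptions; the tokenizer's own mask token is used, so the same pattern applies to both `abazoge/DrLongformer` and `abazoge/DrBERT-4096`.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "abazoge/DrLongformer"  # the same pattern applies to "abazoge/DrBERT-4096"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Insert the tokenizer's own mask token into an illustrative French clinical sentence.
text = f"Le patient présente une {tokenizer.mask_token} pulmonaire."
for prediction in fill_mask(text, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 4))
```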
[ "MEDICAL DATA" ]
BioNLP
# DrLongformer <span style="font-size:larger;">**DrLongformer**</span> is a French pretrained Longformer model based on Clinical-Longformer that was further pretrained on the NACHOS dataset (same dataset as [DrBERT](https://github.com/qanastek/DrBERT)). This model allows up to 4,096 tokens as input. DrLongformer consistently outperforms medical BERT-based models across most downstream tasks regardless of sequence length, except on NER tasks. Evaluated downstream tasks cover named entity recognition (NER), question answering (MCQA), Semantic textual similarity (STS) and text classification tasks (CLS). For more details, please refer to our paper: [Adaptation of Biomedical and Clinical Pretrained Models to French Long Documents: A Comparative Study](). ### Model pretraining We explored multiple strategies for the adaptation of Longformer models to the French medical domain: - Further pretraining of English clinical Longformer on French medical data. - Converting a French medical BERT model to the Longformer architecture. - Pretraining a Longformer from scratch on French medical data. All Pretraining scripts to reproduce the experiments are available in this Github repository: [DrLongformer](https://github.com/abazoge/DrLongformer). For the `from scratch` and `further pretraining` strategies, the training scripts are the same as [DrBERT](https://github.com/qanastek/DrBERT), only the bash scripts are different and available in this repository. All models were trained on the [Jean Zay](http://www.idris.fr/jean-zay/) French supercomputer. | Model name | Corpus | Pretraining strategy | Sequence Length | Model URL | | :------: | :---: | :---: | :---: | :---: | | `DrLongformer` | NACHOS 7 GB | Further pretraining of [Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) | 4096 | [HuggingFace](https://huggingface.co/abazoge/DrLongformer) | | `DrBERT-4096` | NACHOS 7 GB | Conversion of [DrBERT-7B](https://huggingface.co/Dr-BERT/DrBERT-7GB) to the Longformer architecture | 4096 | [HuggingFace](https://huggingface.co/abazoge/DrBERT-4096) | | `DrLongformer-FS (from scratch)` | NACHOS 7 GB | Pretraining from scratch | 4096 | Not available | ### Model Usage You can use DrLongformer directly from [Hugging Face's Transformers](https://github.com/huggingface/transformers): ```python # !pip install transformers from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("abazoge/DrLongformer") model = AutoModelForMaskedLM.from_pretrained("abazoge/DrLongformer") ``` ### Citation ``` @misc{bazoge2024adaptation, title={Adaptation of Biomedical and Clinical Pretrained Models to French Long Documents: A Comparative Study}, author={Adrien Bazoge and Emmanuel Morin and Beatrice Daille and Pierre-Antoine Gourraud}, year={2024}, eprint={2402.16689}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"datasets": ["Dr-BERT/NACHOS"], "language": ["fr"], "library_name": "transformers", "license": "apache-2.0", "tags": ["biomedical", "medical", "clinical", "life science"]}
dataset
null
530
FremyCompany/BioLORD-2023-S
FremyCompany
sentence-similarity
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "medical", "biology", "en", "dataset:FremyCompany/BioLORD-Dataset", "dataset:FremyCompany/AGCT-Dataset", "arxiv:2311.16075", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-02-12T19:00:33Z
2024-02-28T13:51:33+00:00
274
2
--- datasets: - FremyCompany/BioLORD-Dataset - FremyCompany/AGCT-Dataset language: en license: other license_name: ihtsdo-and-nlm-licences license_link: https://www.nlm.nih.gov/databases/umls.html pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - medical - biology widget: - source_sentence: bartonellosis sentences: - cat scratch disease - cat scratch wound - tick-borne orbivirus fever - cat fur --- # FremyCompany/BioLORD-2023-S This model was trained using BioLORD, a new pre-training strategy for producing meaningful representations for clinical sentences and biomedical concepts. State-of-the-art methodologies operate by maximizing the similarity in representation of names referring to the same concept, and preventing collapse through contrastive learning. However, because biomedical names are not always self-explanatory, this sometimes results in non-semantic representations. BioLORD overcomes this issue by grounding its concept representations using definitions, as well as short descriptions derived from a multi-relational knowledge graph consisting of biomedical ontologies. Thanks to this grounding, our model produces more semantic concept representations that match more closely the hierarchical structure of ontologies. BioLORD-2023 establishes a new state of the art for text similarity on both clinical sentences (MedSTS) and biomedical concepts (EHR-Rel-B). This model is based on [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) and was further finetuned on the [BioLORD-Dataset](https://huggingface.co/datasets/FremyCompany/BioLORD-Dataset) and LLM-generated definitions from the [Automatic Glossary of Clinical Terminology (AGCT)](https://huggingface.co/datasets/FremyCompany/AGCT-Dataset). ## Sibling models This model is accompanied by other models in the BioLORD-2023 series, which you might want to check: - [BioLORD-2023-M](https://huggingface.co/FremyCompany/BioLORD-2023-M) (multilingual model; distilled from BioLORD-2023) - [BioLORD-2023](https://huggingface.co/FremyCompany/BioLORD-2023) (best model after model averaging) - [BioLORD-2023-S](https://huggingface.co/FremyCompany/BioLORD-2023-S) (best hyperparameters; no model averaging; this model) - [BioLORD-2023-C](https://huggingface.co/FremyCompany/BioLORD-2023-C) (contrastive training only; for NEL tasks) You can also take a look at last year's model and paper: - [BioLORD-2022](https://huggingface.co/FremyCompany/BioLORD-STAMB2-v1) (also known as BioLORD-STAMB2-v1) ## Training strategy ### Summary of the 3 phases ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f04e8865d08220171a0ad3f/my94lNjxATRU_Rg5knUZ8.png) ### Contrastive phase: details ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f04e8865d08220171a0ad3f/_jE2ETcXkLvYLr7TeOdci.png) ### Self-distillation phase: details ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f04e8865d08220171a0ad3f/7xuqi231RB0OzvcxK3bf-.png) ## Citation This model accompanies the [BioLORD-2023: Learning Ontological Representations from Definitions](https://arxiv.org/abs/2311.16075) paper. 
When you use this model, please cite the original paper as follows: ```latex @article{remy-etal-2023-biolord, author = {Remy, François and Demuynck, Kris and Demeester, Thomas}, title = "{BioLORD-2023: semantic textual representations fusing large language models and clinical knowledge graph insights}", journal = {Journal of the American Medical Informatics Association}, pages = {ocae029}, year = {2024}, month = {02}, issn = {1527-974X}, doi = {10.1093/jamia/ocae029}, url = {https://doi.org/10.1093/jamia/ocae029}, eprint = {https://academic.oup.com/jamia/advance-article-pdf/doi/10.1093/jamia/ocae029/56772025/ocae029.pdf}, } ``` ## Usage (Sentence-Transformers) This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. This model has been finetuned for the biomedical domain. While it preserves a good ability to produce embeddings for general-purpose text, it will be more useful to you if you are trying to process medical documents such as EHR records or clinical notes. Both sentences and phrases can be embedded in the same latent space. Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Cat scratch injury", "Cat scratch disease", "Bartonellosis"] model = SentenceTransformer('FremyCompany/BioLORD-2023-S') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Cat scratch injury", "Cat scratch disease", "Bartonellosis"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('FremyCompany/BioLORD-2023-S') model = AutoModel.from_pretrained('FremyCompany/BioLORD-2023-S') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## License My own contributions for this model are covered by the MIT license. However, given the data used to train this model originates from UMLS and SnomedCT, you will need to ensure you have proper licensing of UMLS and SnomedCT before using this model. 
Both UMLS and SnomedCT are free of charge in most countries, but you might have to create an account and report on your usage of the data yearly to keep a valid license.
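Building on the usage snippets above, the following minimal sketch ranks the concept names from the widget example of this card by cosine similarity. The concept list is taken from the card itself, and the printed scores are illustrative only, as they depend on the exact checkpoint; `sentence-transformers` is assumed to be installed as described earlier.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("FremyCompany/BioLORD-2023-S")

# Concept names taken from the widget example in this card.
query = "bartonellosis"
candidates = ["cat scratch disease", "cat scratch wound", "tick-borne orbivirus fever", "cat fur"]

query_emb = model.encode(query, normalize_embeddings=True)
cand_embs = model.encode(candidates, normalize_embeddings=True)

# Cosine similarity between the query concept and each candidate;
# semantically related names (e.g. "cat scratch disease") should score highest.
scores = util.cos_sim(query_emb, cand_embs)[0]
for name, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```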
[ "EHR-REL" ]
BioNLP
# FremyCompany/BioLORD-2023-S This model was trained using BioLORD, a new pre-training strategy for producing meaningful representations for clinical sentences and biomedical concepts. State-of-the-art methodologies operate by maximizing the similarity in representation of names referring to the same concept, and preventing collapse through contrastive learning. However, because biomedical names are not always self-explanatory, this sometimes results in non-semantic representations. BioLORD overcomes this issue by grounding its concept representations using definitions, as well as short descriptions derived from a multi-relational knowledge graph consisting of biomedical ontologies. Thanks to this grounding, our model produces more semantic concept representations that match more closely the hierarchical structure of ontologies. BioLORD-2023 establishes a new state of the art for text similarity on both clinical sentences (MedSTS) and biomedical concepts (EHR-Rel-B). This model is based on [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) and was further finetuned on the [BioLORD-Dataset](https://huggingface.co/datasets/FremyCompany/BioLORD-Dataset) and LLM-generated definitions from the [Automatic Glossary of Clinical Terminology (AGCT)](https://huggingface.co/datasets/FremyCompany/AGCT-Dataset). ## Sibling models This model is accompanied by other models in the BioLORD-2023 series, which you might want to check: - [BioLORD-2023-M](https://huggingface.co/FremyCompany/BioLORD-2023-M) (multilingual model; distilled from BioLORD-2023) - [BioLORD-2023](https://huggingface.co/FremyCompany/BioLORD-2023) (best model after model averaging) - [BioLORD-2023-S](https://huggingface.co/FremyCompany/BioLORD-2023-S) (best hyperparameters; no model averaging; this model) - [BioLORD-2023-C](https://huggingface.co/FremyCompany/BioLORD-2023-C) (contrastive training only; for NEL tasks) You can also take a look at last year's model and paper: - [BioLORD-2022](https://huggingface.co/FremyCompany/BioLORD-STAMB2-v1) (also known as BioLORD-STAMB2-v1) ## Training strategy ### Summary of the 3 phases ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f04e8865d08220171a0ad3f/my94lNjxATRU_Rg5knUZ8.png) ### Contrastive phase: details ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f04e8865d08220171a0ad3f/_jE2ETcXkLvYLr7TeOdci.png) ### Self-distillation phase: details ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f04e8865d08220171a0ad3f/7xuqi231RB0OzvcxK3bf-.png) ## Citation This model accompanies the [BioLORD-2023: Learning Ontological Representations from Definitions](https://arxiv.org/abs/2311.16075) paper. 
When you use this model, please cite the original paper as follows: ```latex @article{remy-etal-2023-biolord, author = {Remy, François and Demuynck, Kris and Demeester, Thomas}, title = "{BioLORD-2023: semantic textual representations fusing large language models and clinical knowledge graph insights}", journal = {Journal of the American Medical Informatics Association}, pages = {ocae029}, year = {2024}, month = {02}, issn = {1527-974X}, doi = {10.1093/jamia/ocae029}, url = {https://doi.org/10.1093/jamia/ocae029}, eprint = {https://academic.oup.com/jamia/advance-article-pdf/doi/10.1093/jamia/ocae029/56772025/ocae029.pdf}, } ``` ## Usage (Sentence-Transformers) This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. This model has been finetuned for the biomedical domain. While it preserves a good ability to produce embeddings for general-purpose text, it will be more useful to you if you are trying to process medical documents such as EHR records or clinical notes. Both sentences and phrases can be embedded in the same latent space. Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Cat scratch injury", "Cat scratch disease", "Bartonellosis"] model = SentenceTransformer('FremyCompany/BioLORD-2023-S') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Cat scratch injury", "Cat scratch disease", "Bartonellosis"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('FremyCompany/BioLORD-2023-S') model = AutoModel.from_pretrained('FremyCompany/BioLORD-2023-S') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## License My own contributions for this model are covered by the MIT license. However, given the data used to train this model originates from UMLS and SnomedCT, you will need to ensure you have proper licensing of UMLS and SnomedCT before using this model. 
Both UMLS and SnomedCT are free of charge in most countries, but you might have to create an account and report on your usage of the data yearly to keep a valid license.
{"datasets": ["FremyCompany/BioLORD-Dataset", "FremyCompany/AGCT-Dataset"], "language": "en", "license": "other", "license_name": "ihtsdo-and-nlm-licences", "license_link": "https://www.nlm.nih.gov/databases/umls.html", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "medical", "biology"], "widget": [{"source_sentence": "bartonellosis", "sentences": ["cat scratch disease", "cat scratch wound", "tick-borne orbivirus fever", "cat fur"]}]}
dataset
null
531
markaw/NV-Embed-v2
markaw
feature-extraction
[ "transformers", "safetensors", "nvembed", "feature-extraction", "mteb", "custom_code", "en", "arxiv:2405.17428", "arxiv:2407.15831", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
2024-09-24T20:33:50Z
2024-09-25T08:49:20+00:00
15
0
--- language: - en library_name: transformers license: cc-by-nc-4.0 tags: - mteb model-index: - name: NV-Embed-v2 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 94.28358208955224 - type: accuracy_stderr value: 0.40076780842082305 - type: ap value: 76.49097318319616 - type: ap_stderr value: 1.2418692675183929 - type: f1 value: 91.41982003001168 - type: f1_stderr value: 0.5043921413093579 - type: main_score value: 94.28358208955224 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.74185000000001 - type: accuracy_stderr value: 0.07420471683120942 - type: ap value: 96.4737144875525 - type: ap_stderr value: 0.2977518241541558 - type: f1 value: 97.7417581594921 - type: f1_stderr value: 0.07428763617010377 - type: main_score value: 97.74185000000001 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 63.96000000000001 - type: accuracy_stderr value: 1.815555011559825 - type: f1 value: 62.49361841640459 - type: f1_stderr value: 2.829339314126457 - type: main_score value: 63.96000000000001 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 46.515 - type: map_at_10 value: 62.392 - type: map_at_100 value: 62.732 - type: map_at_1000 value: 62.733000000000004 - type: map_at_3 value: 58.701 - type: map_at_5 value: 61.027 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 46.515 - type: ndcg_at_10 value: 70.074 - type: ndcg_at_100 value: 71.395 - type: ndcg_at_1000 value: 71.405 - type: ndcg_at_3 value: 62.643 - type: ndcg_at_5 value: 66.803 - type: precision_at_1 value: 46.515 - type: precision_at_10 value: 9.41 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 24.68 - type: precision_at_5 value: 16.814 - type: recall_at_1 value: 46.515 - type: recall_at_10 value: 94.097 - type: recall_at_100 value: 99.57300000000001 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 74.03999999999999 - type: recall_at_5 value: 84.068 - type: main_score value: 70.074 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: main_score value: 55.79933795955242 - type: v_measure value: 55.79933795955242 - type: v_measure_std value: 14.575108141916148 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: main_score value: 51.262845995850334 - type: v_measure value: 51.262845995850334 - type: v_measure_std value: 14.727824473104173 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 
2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.46477327480808 - type: mrr value: 79.50160488941653 - type: main_score value: 67.46477327480808 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cosine_pearson value: 89.74311007980987 - type: cosine_spearman value: 87.41644967443246 - type: manhattan_pearson value: 88.57457108347744 - type: manhattan_spearman value: 87.59295972042997 - type: euclidean_pearson value: 88.27108977118459 - type: euclidean_spearman value: 87.41644967443246 - type: main_score value: 87.41644967443246 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 92.41558441558443 - type: accuracy_stderr value: 0.37701502251934443 - type: f1 value: 92.38130170447671 - type: f1_stderr value: 0.39115151225617767 - type: main_score value: 92.41558441558443 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: main_score value: 54.08649516394218 - type: v_measure value: 54.08649516394218 - type: v_measure_std value: 0.5303233693045373 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: main_score value: 49.60352214167779 - type: v_measure value: 49.60352214167779 - type: v_measure_std value: 0.7176198612516721 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: CQADupstackRetrieval_is_a_combined_dataset config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 31.913249999999998 - type: map_at_10 value: 43.87733333333334 - type: map_at_100 value: 45.249916666666664 - type: map_at_1000 value: 45.350583333333326 - type: map_at_3 value: 40.316833333333335 - type: map_at_5 value: 42.317083333333336 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 38.30616666666667 - type: ndcg_at_10 value: 50.24175000000001 - type: ndcg_at_100 value: 55.345333333333336 - type: ndcg_at_1000 value: 56.91225000000001 - type: ndcg_at_3 value: 44.67558333333333 - type: ndcg_at_5 value: 47.32333333333334 - type: precision_at_1 value: 38.30616666666667 - type: precision_at_10 value: 9.007416666666666 - type: precision_at_100 value: 1.3633333333333333 - type: precision_at_1000 value: 0.16691666666666666 - type: precision_at_3 value: 20.895666666666667 - type: precision_at_5 value: 14.871666666666666 - type: recall_at_1 value: 31.913249999999998 - type: recall_at_10 value: 64.11891666666666 - type: recall_at_100 value: 85.91133333333333 - type: recall_at_1000 value: 96.28225 - type: recall_at_3 value: 48.54749999999999 - type: recall_at_5 value: 55.44283333333334 - type: main_score value: 50.24175000000001 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 19.556 - type: map_at_10 value: 34.623 - type: map_at_100 value: 36.97 - type: map_at_1000 value: 37.123 - type: map_at_3 
value: 28.904999999999998 - type: map_at_5 value: 31.955 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 44.104 - type: ndcg_at_10 value: 45.388 - type: ndcg_at_100 value: 52.793 - type: ndcg_at_1000 value: 55.108999999999995 - type: ndcg_at_3 value: 38.604 - type: ndcg_at_5 value: 40.806 - type: precision_at_1 value: 44.104 - type: precision_at_10 value: 14.143 - type: precision_at_100 value: 2.2190000000000003 - type: precision_at_1000 value: 0.266 - type: precision_at_3 value: 29.316 - type: precision_at_5 value: 21.98 - type: recall_at_1 value: 19.556 - type: recall_at_10 value: 52.120999999999995 - type: recall_at_100 value: 76.509 - type: recall_at_1000 value: 89.029 - type: recall_at_3 value: 34.919 - type: recall_at_5 value: 42.18 - type: main_score value: 45.388 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 10.714 - type: map_at_10 value: 25.814999999999998 - type: map_at_100 value: 37.845 - type: map_at_1000 value: 39.974 - type: map_at_3 value: 17.201 - type: map_at_5 value: 21.062 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 66.0 - type: ndcg_at_10 value: 53.496 - type: ndcg_at_100 value: 58.053 - type: ndcg_at_1000 value: 64.886 - type: ndcg_at_3 value: 57.656 - type: ndcg_at_5 value: 55.900000000000006 - type: precision_at_1 value: 77.25 - type: precision_at_10 value: 43.65 - type: precision_at_100 value: 13.76 - type: precision_at_1000 value: 2.5940000000000003 - type: precision_at_3 value: 61.0 - type: precision_at_5 value: 54.65 - type: recall_at_1 value: 10.714 - type: recall_at_10 value: 31.173000000000002 - type: recall_at_100 value: 63.404 - type: recall_at_1000 value: 85.874 - type: recall_at_3 value: 18.249000000000002 - type: recall_at_5 value: 23.69 - type: main_score value: 53.496 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 93.38499999999999 - type: accuracy_stderr value: 0.13793114224133846 - type: f1 value: 90.12141028353496 - type: f1_stderr value: 0.174640257706043 - type: main_score value: 93.38499999999999 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 84.66900000000001 - type: map_at_10 value: 91.52799999999999 - type: map_at_100 value: 91.721 - type: map_at_1000 value: 91.73 - type: map_at_3 value: 90.752 - type: map_at_5 value: 91.262 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 91.20899999999999 - type: ndcg_at_10 value: 93.74900000000001 - type: ndcg_at_100 value: 94.279 - type: ndcg_at_1000 value: 94.408 - type: ndcg_at_3 value: 92.923 - type: ndcg_at_5 value: 93.376 - type: precision_at_1 value: 91.20899999999999 - type: precision_at_10 value: 11.059 - type: precision_at_100 value: 1.1560000000000001 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 
35.129 - type: precision_at_5 value: 21.617 - type: recall_at_1 value: 84.66900000000001 - type: recall_at_10 value: 97.03399999999999 - type: recall_at_100 value: 98.931 - type: recall_at_1000 value: 99.65899999999999 - type: recall_at_3 value: 94.76299999999999 - type: recall_at_5 value: 95.968 - type: main_score value: 93.74900000000001 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 34.866 - type: map_at_10 value: 58.06099999999999 - type: map_at_100 value: 60.028999999999996 - type: map_at_1000 value: 60.119 - type: map_at_3 value: 51.304 - type: map_at_5 value: 55.054 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 64.815 - type: ndcg_at_10 value: 65.729 - type: ndcg_at_100 value: 71.14 - type: ndcg_at_1000 value: 72.336 - type: ndcg_at_3 value: 61.973 - type: ndcg_at_5 value: 62.858000000000004 - type: precision_at_1 value: 64.815 - type: precision_at_10 value: 17.87 - type: precision_at_100 value: 2.373 - type: precision_at_1000 value: 0.258 - type: precision_at_3 value: 41.152 - type: precision_at_5 value: 29.568 - type: recall_at_1 value: 34.866 - type: recall_at_10 value: 72.239 - type: recall_at_100 value: 91.19 - type: recall_at_1000 value: 98.154 - type: recall_at_3 value: 56.472 - type: recall_at_5 value: 63.157 - type: main_score value: 65.729 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 44.651999999999994 - type: map_at_10 value: 79.95100000000001 - type: map_at_100 value: 80.51700000000001 - type: map_at_1000 value: 80.542 - type: map_at_3 value: 77.008 - type: map_at_5 value: 78.935 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 89.305 - type: ndcg_at_10 value: 85.479 - type: ndcg_at_100 value: 87.235 - type: ndcg_at_1000 value: 87.669 - type: ndcg_at_3 value: 81.648 - type: ndcg_at_5 value: 83.88600000000001 - type: precision_at_1 value: 89.305 - type: precision_at_10 value: 17.807000000000002 - type: precision_at_100 value: 1.9140000000000001 - type: precision_at_1000 value: 0.197 - type: precision_at_3 value: 53.756 - type: precision_at_5 value: 34.018 - type: recall_at_1 value: 44.651999999999994 - type: recall_at_10 value: 89.034 - type: recall_at_100 value: 95.719 - type: recall_at_1000 value: 98.535 - type: recall_at_3 value: 80.635 - type: recall_at_5 value: 85.044 - type: main_score value: 85.479 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 97.1376 - type: accuracy_stderr value: 0.04571914259913447 - type: ap value: 95.92783808558808 - type: ap_stderr value: 0.05063782483358255 - type: f1 value: 97.13755519177172 - type: f1_stderr value: 0.04575943074086138 - type: main_score value: 97.1376 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 0.0 - type: map_at_10 value: 38.342 - type: map_at_100 value: 0.0 - type: 
map_at_1000 value: 0.0 - type: map_at_3 value: 0.0 - type: map_at_5 value: 0.0 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 0.0 - type: ndcg_at_10 value: 45.629999999999995 - type: ndcg_at_100 value: 0.0 - type: ndcg_at_1000 value: 0.0 - type: ndcg_at_3 value: 0.0 - type: ndcg_at_5 value: 0.0 - type: precision_at_1 value: 0.0 - type: precision_at_10 value: 7.119000000000001 - type: precision_at_100 value: 0.0 - type: precision_at_1000 value: 0.0 - type: precision_at_3 value: 0.0 - type: precision_at_5 value: 0.0 - type: recall_at_1 value: 0.0 - type: recall_at_10 value: 67.972 - type: recall_at_100 value: 0.0 - type: recall_at_1000 value: 0.0 - type: recall_at_3 value: 0.0 - type: recall_at_5 value: 0.0 - type: main_score value: 45.629999999999995 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 99.24988600091199 - type: accuracy_stderr value: 0.04496826931900734 - type: f1 value: 99.15933275095276 - type: f1_stderr value: 0.05565039139747446 - type: main_score value: 99.24988600091199 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 94.3684450524396 - type: accuracy_stderr value: 0.8436548701322188 - type: f1 value: 77.33022623133307 - type: f1_stderr value: 0.9228425861187275 - type: main_score value: 94.3684450524396 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 86.09616677874916 - type: accuracy_stderr value: 0.9943208055590853 - type: f1 value: 83.4902056490062 - type: f1_stderr value: 0.7626189310074184 - type: main_score value: 86.09616677874916 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 92.17215870880968 - type: accuracy_stderr value: 0.25949941333658166 - type: f1 value: 91.36757392422702 - type: f1_stderr value: 0.29139507298154815 - type: main_score value: 92.17215870880968 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: main_score value: 46.09497344077905 - type: v_measure value: 46.09497344077905 - type: v_measure_std value: 1.44871520869784 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: main_score value: 44.861049989560684 - type: v_measure value: 44.861049989560684 - type: v_measure_std value: 1.432199293162203 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.75936162919999 - type: mrr value: 32.966812736541236 - type: main_score value: 31.75936162919999 - task: type: Retrieval dataset: name: MTEB NFCorpus type: 
mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 7.893999999999999 - type: map_at_10 value: 17.95 - type: map_at_100 value: 23.474 - type: map_at_1000 value: 25.412000000000003 - type: map_at_3 value: 12.884 - type: map_at_5 value: 15.171000000000001 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 55.728 - type: ndcg_at_10 value: 45.174 - type: ndcg_at_100 value: 42.18 - type: ndcg_at_1000 value: 50.793 - type: ndcg_at_3 value: 50.322 - type: ndcg_at_5 value: 48.244 - type: precision_at_1 value: 57.276 - type: precision_at_10 value: 33.437 - type: precision_at_100 value: 10.671999999999999 - type: precision_at_1000 value: 2.407 - type: precision_at_3 value: 46.646 - type: precision_at_5 value: 41.672 - type: recall_at_1 value: 7.893999999999999 - type: recall_at_10 value: 22.831000000000003 - type: recall_at_100 value: 43.818 - type: recall_at_1000 value: 75.009 - type: recall_at_3 value: 14.371 - type: recall_at_5 value: 17.752000000000002 - type: main_score value: 45.174 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 49.351 - type: map_at_10 value: 66.682 - type: map_at_100 value: 67.179 - type: map_at_1000 value: 67.18499999999999 - type: map_at_3 value: 62.958999999999996 - type: map_at_5 value: 65.364 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 55.417 - type: ndcg_at_10 value: 73.568 - type: ndcg_at_100 value: 75.35 - type: ndcg_at_1000 value: 75.478 - type: ndcg_at_3 value: 67.201 - type: ndcg_at_5 value: 70.896 - type: precision_at_1 value: 55.417 - type: precision_at_10 value: 11.036999999999999 - type: precision_at_100 value: 1.204 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 29.654000000000003 - type: precision_at_5 value: 20.006 - type: recall_at_1 value: 49.351 - type: recall_at_10 value: 91.667 - type: recall_at_100 value: 98.89 - type: recall_at_1000 value: 99.812 - type: recall_at_3 value: 75.715 - type: recall_at_5 value: 84.072 - type: main_score value: 73.568 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: map_at_1 value: 71.358 - type: map_at_10 value: 85.474 - type: map_at_100 value: 86.101 - type: map_at_1000 value: 86.114 - type: map_at_3 value: 82.562 - type: map_at_5 value: 84.396 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 82.12 - type: ndcg_at_10 value: 89.035 - type: ndcg_at_100 value: 90.17399999999999 - type: ndcg_at_1000 value: 90.243 - type: ndcg_at_3 value: 86.32300000000001 - type: ndcg_at_5 value: 87.85 - type: precision_at_1 value: 82.12 - type: precision_at_10 value: 13.55 - type: precision_at_100 value: 1.54 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.89 - type: precision_at_5 value: 24.9 - type: recall_at_1 value: 71.358 - type: recall_at_10 value: 95.855 - type: recall_at_100 value: 99.711 - type: recall_at_1000 value: 99.994 - 
type: recall_at_3 value: 88.02 - type: recall_at_5 value: 92.378 - type: main_score value: 89.035 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: main_score value: 71.0984522742521 - type: v_measure value: 71.0984522742521 - type: v_measure_std value: 3.5668139917058044 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: main_score value: 74.94499641904133 - type: v_measure value: 74.94499641904133 - type: v_measure_std value: 11.419672879389248 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: map_at_1 value: 5.343 - type: map_at_10 value: 13.044 - type: map_at_100 value: 15.290999999999999 - type: map_at_1000 value: 15.609 - type: map_at_3 value: 9.227 - type: map_at_5 value: 11.158 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 26.3 - type: ndcg_at_10 value: 21.901 - type: ndcg_at_100 value: 30.316 - type: ndcg_at_1000 value: 35.547000000000004 - type: ndcg_at_3 value: 20.560000000000002 - type: ndcg_at_5 value: 18.187 - type: precision_at_1 value: 26.3 - type: precision_at_10 value: 11.34 - type: precision_at_100 value: 2.344 - type: precision_at_1000 value: 0.359 - type: precision_at_3 value: 18.967 - type: precision_at_5 value: 15.920000000000002 - type: recall_at_1 value: 5.343 - type: recall_at_10 value: 22.997 - type: recall_at_100 value: 47.562 - type: recall_at_1000 value: 72.94500000000001 - type: recall_at_3 value: 11.533 - type: recall_at_5 value: 16.148 - type: main_score value: 21.901 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cosine_pearson value: 87.3054603493591 - type: cosine_spearman value: 82.14763206055602 - type: manhattan_pearson value: 84.78737790237557 - type: manhattan_spearman value: 81.88455356002758 - type: euclidean_pearson value: 85.00668629311117 - type: euclidean_spearman value: 82.14763037860851 - type: main_score value: 82.14763206055602 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cosine_pearson value: 86.6911864687294 - type: cosine_spearman value: 77.89286260403269 - type: manhattan_pearson value: 82.87240347680857 - type: manhattan_spearman value: 78.10055393740326 - type: euclidean_pearson value: 82.72282535777123 - type: euclidean_spearman value: 77.89256648406325 - type: main_score value: 77.89286260403269 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cosine_pearson value: 87.7220832598633 - type: cosine_spearman value: 88.30238972017452 - type: manhattan_pearson value: 87.88214789140248 - type: manhattan_spearman value: 88.24770220032391 - type: euclidean_pearson value: 87.98610386257103 - type: euclidean_spearman value: 88.30238972017452 - type: main_score value: 88.30238972017452 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default 
split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cosine_pearson value: 85.70614623247714 - type: cosine_spearman value: 84.29920990970672 - type: manhattan_pearson value: 84.9836190531721 - type: manhattan_spearman value: 84.40933470597638 - type: euclidean_pearson value: 84.96652336693347 - type: euclidean_spearman value: 84.29920989531965 - type: main_score value: 84.29920990970672 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cosine_pearson value: 88.4169972425264 - type: cosine_spearman value: 89.03555007807218 - type: manhattan_pearson value: 88.83068699455478 - type: manhattan_spearman value: 89.21877175674125 - type: euclidean_pearson value: 88.7251052947544 - type: euclidean_spearman value: 89.03557389893083 - type: main_score value: 89.03555007807218 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cosine_pearson value: 85.63830579034632 - type: cosine_spearman value: 86.77353371581373 - type: manhattan_pearson value: 86.24830492396637 - type: manhattan_spearman value: 86.96754348626189 - type: euclidean_pearson value: 86.09837038778359 - type: euclidean_spearman value: 86.77353371581373 - type: main_score value: 86.77353371581373 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cosine_pearson value: 91.2204675588959 - type: cosine_spearman value: 90.66976712249057 - type: manhattan_pearson value: 91.11007808242346 - type: manhattan_spearman value: 90.51739232964488 - type: euclidean_pearson value: 91.19588941007903 - type: euclidean_spearman value: 90.66976712249057 - type: main_score value: 90.66976712249057 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cosine_pearson value: 69.34416749707114 - type: cosine_spearman value: 68.11632448161046 - type: manhattan_pearson value: 68.99243488935281 - type: manhattan_spearman value: 67.8398546438258 - type: euclidean_pearson value: 69.06376010216088 - type: euclidean_spearman value: 68.11632448161046 - type: main_score value: 68.11632448161046 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cosine_pearson value: 88.10309739429758 - type: cosine_spearman value: 88.40520383147418 - type: manhattan_pearson value: 88.50753383813232 - type: manhattan_spearman value: 88.66382629460927 - type: euclidean_pearson value: 88.35050664609376 - type: euclidean_spearman value: 88.40520383147418 - type: main_score value: 88.40520383147418 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.58627126942797 - type: mrr value: 97.01098103058887 - type: main_score value: 87.58627126942797 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 62.883 - type: map_at_10 value: 75.371 - type: map_at_100 value: 75.66000000000001 - type: map_at_1000 value: 75.667 - 
type: map_at_3 value: 72.741 - type: map_at_5 value: 74.74 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 66.0 - type: ndcg_at_10 value: 80.12700000000001 - type: ndcg_at_100 value: 81.291 - type: ndcg_at_1000 value: 81.464 - type: ndcg_at_3 value: 76.19 - type: ndcg_at_5 value: 78.827 - type: precision_at_1 value: 66.0 - type: precision_at_10 value: 10.567 - type: precision_at_100 value: 1.117 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 30.333 - type: precision_at_5 value: 20.133000000000003 - type: recall_at_1 value: 62.883 - type: recall_at_10 value: 93.556 - type: recall_at_100 value: 98.667 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 83.322 - type: recall_at_5 value: 89.756 - type: main_score value: 80.12700000000001 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.87524752475248 - type: cos_sim_accuracy_threshold value: 74.86587762832642 - type: cos_sim_ap value: 97.02222446606328 - type: cos_sim_f1 value: 93.66197183098592 - type: cos_sim_f1_threshold value: 74.74223375320435 - type: cos_sim_precision value: 94.23076923076923 - type: cos_sim_recall value: 93.10000000000001 - type: dot_accuracy value: 99.87524752475248 - type: dot_accuracy_threshold value: 74.86587762832642 - type: dot_ap value: 97.02222688043362 - type: dot_f1 value: 93.66197183098592 - type: dot_f1_threshold value: 74.74223375320435 - type: dot_precision value: 94.23076923076923 - type: dot_recall value: 93.10000000000001 - type: euclidean_accuracy value: 99.87524752475248 - type: euclidean_accuracy_threshold value: 70.9000825881958 - type: euclidean_ap value: 97.02222446606329 - type: euclidean_f1 value: 93.66197183098592 - type: euclidean_f1_threshold value: 71.07426524162292 - type: euclidean_precision value: 94.23076923076923 - type: euclidean_recall value: 93.10000000000001 - type: manhattan_accuracy value: 99.87623762376238 - type: manhattan_accuracy_threshold value: 3588.5040283203125 - type: manhattan_ap value: 97.09194643777883 - type: manhattan_f1 value: 93.7375745526839 - type: manhattan_f1_threshold value: 3664.3760681152344 - type: manhattan_precision value: 93.18181818181817 - type: manhattan_recall value: 94.3 - type: max_accuracy value: 99.87623762376238 - type: max_ap value: 97.09194643777883 - type: max_f1 value: 93.7375745526839 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: main_score value: 82.10134099988541 - type: v_measure value: 82.10134099988541 - type: v_measure_std value: 2.7926349897769533 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: main_score value: 48.357450742397404 - type: v_measure value: 48.357450742397404 - type: v_measure_std value: 1.520118876440547 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - 
type: map value: 55.79277200802986 - type: mrr value: 56.742517082590616 - type: main_score value: 55.79277200802986 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cosine_spearman value: 30.701215774712693 - type: cosine_pearson value: 31.26740037278488 - type: dot_spearman value: 30.701215774712693 - type: dot_pearson value: 31.267404144879997 - type: main_score value: 30.701215774712693 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: map_at_1 value: 0.23800000000000002 - type: map_at_10 value: 2.31 - type: map_at_100 value: 15.495000000000001 - type: map_at_1000 value: 38.829 - type: map_at_3 value: 0.72 - type: map_at_5 value: 1.185 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 91.0 - type: ndcg_at_10 value: 88.442 - type: ndcg_at_100 value: 71.39 - type: ndcg_at_1000 value: 64.153 - type: ndcg_at_3 value: 89.877 - type: ndcg_at_5 value: 89.562 - type: precision_at_1 value: 92.0 - type: precision_at_10 value: 92.60000000000001 - type: precision_at_100 value: 73.74000000000001 - type: precision_at_1000 value: 28.222 - type: precision_at_3 value: 94.0 - type: precision_at_5 value: 93.60000000000001 - type: recall_at_1 value: 0.23800000000000002 - type: recall_at_10 value: 2.428 - type: recall_at_100 value: 18.099999999999998 - type: recall_at_1000 value: 60.79599999999999 - type: recall_at_3 value: 0.749 - type: recall_at_5 value: 1.238 - type: main_score value: 88.442 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 3.4939999999999998 - type: map_at_10 value: 12.531999999999998 - type: map_at_100 value: 19.147 - type: map_at_1000 value: 20.861 - type: map_at_3 value: 7.558 - type: map_at_5 value: 9.49 - type: mrr_at_1 value: 0.0 - type: mrr_at_10 value: 0.0 - type: mrr_at_100 value: 0.0 - type: mrr_at_1000 value: 0.0 - type: mrr_at_3 value: 0.0 - type: mrr_at_5 value: 0.0 - type: ndcg_at_1 value: 47.959 - type: ndcg_at_10 value: 31.781 - type: ndcg_at_100 value: 42.131 - type: ndcg_at_1000 value: 53.493 - type: ndcg_at_3 value: 39.204 - type: ndcg_at_5 value: 34.635 - type: precision_at_1 value: 48.980000000000004 - type: precision_at_10 value: 27.143 - type: precision_at_100 value: 8.224 - type: precision_at_1000 value: 1.584 - type: precision_at_3 value: 38.775999999999996 - type: precision_at_5 value: 33.061 - type: recall_at_1 value: 3.4939999999999998 - type: recall_at_10 value: 18.895 - type: recall_at_100 value: 50.192 - type: recall_at_1000 value: 85.167 - type: recall_at_3 value: 8.703 - type: recall_at_5 value: 11.824 - type: main_score value: 31.781 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 92.7402 - type: accuracy_stderr value: 1.020764595781027 - type: ap value: 44.38594756333084 - type: ap_stderr value: 1.817150701258273 - type: f1 value: 79.95699280019547 - type: f1_stderr value: 1.334582498702029 - type: main_score value: 92.7402 - task: type: 
Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 80.86870401810978 - type: accuracy_stderr value: 0.22688467782004712 - type: f1 value: 81.1829040745744 - type: f1_stderr value: 0.19774920574849694 - type: main_score value: 80.86870401810978 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: main_score value: 64.82048869927482 - type: v_measure value: 64.82048869927482 - type: v_measure_std value: 0.9170394252450564 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.44251057996067 - type: cos_sim_accuracy_threshold value: 70.2150285243988 - type: cos_sim_ap value: 81.11422351199913 - type: cos_sim_f1 value: 73.71062868615887 - type: cos_sim_f1_threshold value: 66.507488489151 - type: cos_sim_precision value: 70.2799712849964 - type: cos_sim_recall value: 77.4934036939314 - type: dot_accuracy value: 88.44251057996067 - type: dot_accuracy_threshold value: 70.2150285243988 - type: dot_ap value: 81.11420529068658 - type: dot_f1 value: 73.71062868615887 - type: dot_f1_threshold value: 66.50749444961548 - type: dot_precision value: 70.2799712849964 - type: dot_recall value: 77.4934036939314 - type: euclidean_accuracy value: 88.44251057996067 - type: euclidean_accuracy_threshold value: 77.18156576156616 - type: euclidean_ap value: 81.11422421732487 - type: euclidean_f1 value: 73.71062868615887 - type: euclidean_f1_threshold value: 81.84436559677124 - type: euclidean_precision value: 70.2799712849964 - type: euclidean_recall value: 77.4934036939314 - type: manhattan_accuracy value: 88.26369434344639 - type: manhattan_accuracy_threshold value: 3837.067413330078 - type: manhattan_ap value: 80.81442360477725 - type: manhattan_f1 value: 73.39883099117024 - type: manhattan_f1_threshold value: 4098.833847045898 - type: manhattan_precision value: 69.41896024464832 - type: manhattan_recall value: 77.86279683377309 - type: max_accuracy value: 88.44251057996067 - type: max_ap value: 81.11422421732487 - type: max_f1 value: 73.71062868615887 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 90.03182365040556 - type: cos_sim_accuracy_threshold value: 64.46443796157837 - type: cos_sim_ap value: 87.86649113691112 - type: cos_sim_f1 value: 80.45644844577821 - type: cos_sim_f1_threshold value: 61.40774488449097 - type: cos_sim_precision value: 77.54052702992216 - type: cos_sim_recall value: 83.60024638127503 - type: dot_accuracy value: 90.03182365040556 - type: dot_accuracy_threshold value: 64.46444988250732 - type: dot_ap value: 87.86649011954319 - type: dot_f1 value: 80.45644844577821 - type: dot_f1_threshold value: 61.407750844955444 - type: dot_precision value: 77.54052702992216 - type: dot_recall value: 83.60024638127503 - type: euclidean_accuracy value: 90.03182365040556 - type: euclidean_accuracy_threshold value: 84.30368900299072 - type: euclidean_ap value: 87.86649114275045 - type: euclidean_f1 value: 
80.45644844577821 - type: euclidean_f1_threshold value: 87.8547191619873 - type: euclidean_precision value: 77.54052702992216 - type: euclidean_recall value: 83.60024638127503 - type: manhattan_accuracy value: 89.99883572010712 - type: manhattan_accuracy_threshold value: 4206.838607788086 - type: manhattan_ap value: 87.8600826607838 - type: manhattan_f1 value: 80.44054508120217 - type: manhattan_f1_threshold value: 4372.755432128906 - type: manhattan_precision value: 78.08219178082192 - type: manhattan_recall value: 82.94579611949491 - type: max_accuracy value: 90.03182365040556 - type: max_ap value: 87.86649114275045 - type: max_f1 value: 80.45644844577821
---

## Introduction
We present NV-Embed-v2, a generalist embedding model that ranks No. 1 on the Massive Text Embedding Benchmark ([MTEB benchmark](https://huggingface.co/spaces/mteb/leaderboard)) (as of Aug 30, 2024) with a score of 72.31 across 56 text embedding tasks. It also holds the No. 1 spot in the retrieval sub-category (a score of 62.65 across 15 tasks) of the leaderboard, which is essential to the development of RAG technology.

NV-Embed-v2 introduces several new designs, including having the LLM attend to latent vectors for better pooled embedding output, and a two-stage instruction tuning method that enhances the accuracy of both retrieval and non-retrieval tasks. Additionally, NV-Embed-v2 incorporates a novel hard-negative mining method that takes the positive relevance score into account for better false-negative removal.

For more technical details, refer to our paper: [NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models](https://arxiv.org/pdf/2405.17428).

## Model Details
- Base Decoder-only LLM: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- Pooling Type: Latent-Attention
- Embedding Dimension: 4096

## How to use

Here is an example of how to encode queries and passages using HuggingFace Transformers and Sentence-Transformers. Please find the required package versions [here](https://huggingface.co/nvidia/NV-Embed-v2#2-required-packages).

### Usage (HuggingFace Transformers)

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Each query needs to be accompanied by a corresponding instruction describing the task.
task_name_to_instruct = {"example": "Given a question, retrieve passages that answer the question",}
query_prefix = "Instruct: "+task_name_to_instruct["example"]+"\nQuery: "
queries = [
    'are judo throws allowed in wrestling?',
    'how to become a radiology technician in michigan?'
]

# No instruction needed for retrieval passages
passage_prefix = ""
passages = [
    "Since you're reading this, you are probably someone from a judo background or someone who is just wondering how judo techniques can be applied under wrestling rules. So without further ado, let's get to the question. Are Judo throws allowed in wrestling? Yes, judo throws are allowed in freestyle and folkstyle wrestling. You only need to be careful to follow the slam rules when executing judo throws. In wrestling, a slam is lifting and returning an opponent to the mat with unnecessary force.",
    "Below are the basic steps to becoming a radiologic technologist in Michigan:Earn a high school diploma. As with most careers in health care, a high school education is the first step to finding entry-level employment. Taking classes in math and science, such as anatomy, biology, chemistry, physiology, and physics, can help prepare students for their college studies and future careers.Earn an associate degree. Entry-level radiologic positions typically require at least an Associate of Applied Science. Before enrolling in one of these degree programs, students should make sure it has been properly accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT).Get licensed or certified in the state of Michigan."
]

# load model with tokenizer
model = AutoModel.from_pretrained('nvidia/NV-Embed-v2', trust_remote_code=True)

# get the embeddings
max_length = 32768
query_embeddings = model.encode(queries, instruction=query_prefix, max_length=max_length)
passage_embeddings = model.encode(passages, instruction=passage_prefix, max_length=max_length)

# normalize embeddings
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
passage_embeddings = F.normalize(passage_embeddings, p=2, dim=1)

# get the embeddings with a DataLoader (splitting the datasets into multiple mini-batches)
# batch_size = 2
# query_embeddings = model._do_encode(queries, batch_size=batch_size, instruction=query_prefix, max_length=max_length, num_workers=32, return_numpy=True)
# passage_embeddings = model._do_encode(passages, batch_size=batch_size, instruction=passage_prefix, max_length=max_length, num_workers=32, return_numpy=True)

scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
# [[87.42693328857422, 0.46283677220344543], [0.965264618396759, 86.03721618652344]]
```
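For retrieval, the score matrix above can be turned into a per-query ranking. The following is only a minimal sketch, assuming the `scores` tensor produced by the block above; the `rank_passages` helper is illustrative and not part of the model's API.

```python
import torch

def rank_passages(scores: torch.Tensor, k: int = 2):
    """Return (values, indices) of the top-k passages for each query row."""
    k = min(k, scores.shape[1])
    return torch.topk(scores, k=k, dim=1)

# Example: rank the two passages for each of the two queries encoded above.
top_scores, top_indices = rank_passages(scores, k=2)
for q_idx, (vals, idxs) in enumerate(zip(top_scores.tolist(), top_indices.tolist())):
    print(f"query {q_idx}: best passage {idxs[0]} (score {vals[0]:.2f})")
```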
### Usage (Sentence-Transformers)

```python
import torch
from sentence_transformers import SentenceTransformer

# Each query needs to be accompanied by a corresponding instruction describing the task.
task_name_to_instruct = {"example": "Given a question, retrieve passages that answer the question",}
query_prefix = "Instruct: "+task_name_to_instruct["example"]+"\nQuery: "
queries = [
    'are judo throws allowed in wrestling?',
    'how to become a radiology technician in michigan?'
]

# No instruction needed for retrieval passages
passages = [
    "Since you're reading this, you are probably someone from a judo background or someone who is just wondering how judo techniques can be applied under wrestling rules. So without further ado, let's get to the question. Are Judo throws allowed in wrestling? Yes, judo throws are allowed in freestyle and folkstyle wrestling. You only need to be careful to follow the slam rules when executing judo throws. In wrestling, a slam is lifting and returning an opponent to the mat with unnecessary force.",
    "Below are the basic steps to becoming a radiologic technologist in Michigan:Earn a high school diploma. As with most careers in health care, a high school education is the first step to finding entry-level employment. Taking classes in math and science, such as anatomy, biology, chemistry, physiology, and physics, can help prepare students for their college studies and future careers.Earn an associate degree. Entry-level radiologic positions typically require at least an Associate of Applied Science. Before enrolling in one of these degree programs, students should make sure it has been properly accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT).Get licensed or certified in the state of Michigan."
]

# load model with tokenizer
model = SentenceTransformer('nvidia/NV-Embed-v2', trust_remote_code=True)
model.max_seq_length = 32768
model.tokenizer.padding_side = "right"

def add_eos(input_examples):
    input_examples = [input_example + model.tokenizer.eos_token for input_example in input_examples]
    return input_examples

# get the embeddings
batch_size = 2
query_embeddings = model.encode(add_eos(queries), batch_size=batch_size, prompt=query_prefix, normalize_embeddings=True)
passage_embeddings = model.encode(add_eos(passages), batch_size=batch_size, normalize_embeddings=True)

scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
```
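For larger passage collections, the same embeddings can also be passed to the semantic-search utility that ships with Sentence-Transformers. A sketch only, assuming the `query_embeddings` and `passage_embeddings` arrays produced by the block above; the `top_k` value is arbitrary.

```python
from sentence_transformers import util

# util.semantic_search scores each query against all passages (cosine similarity
# by default) and returns the top_k hits per query, sorted by score.
hits = util.semantic_search(query_embeddings, passage_embeddings, top_k=2)

for q_idx, query_hits in enumerate(hits):
    for hit in query_hits:
        print(f"query {q_idx} -> passage {hit['corpus_id']} (cosine score {hit['score']:.4f})")
```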
## License
This model should not be used for any commercial purpose. Refer to the [license](https://spdx.org/licenses/CC-BY-NC-4.0) for the detailed terms. For commercial purposes, we recommend using the models of [NeMo Retriever Microservices (NIMs)](https://build.nvidia.com/explore/retrieval).

## Correspondence to
Chankyu Lee ([email protected]), Wei Ping ([email protected])

## Citation
If you find this code useful in your research, please consider citing:

```bibtex
@article{lee2024nv,
  title={NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models},
  author={Lee, Chankyu and Roy, Rajarshi and Xu, Mengyao and Raiman, Jonathan and Shoeybi, Mohammad and Catanzaro, Bryan and Ping, Wei},
  journal={arXiv preprint arXiv:2405.17428},
  year={2024}
}
```

```bibtex
@article{moreira2024nv,
  title={NV-Retriever: Improving text embedding models with effective hard-negative mining},
  author={Moreira, Gabriel de Souza P and Osmulski, Radek and Xu, Mengyao and Ak, Ronay and Schifferer, Benedikt and Oldridge, Even},
  journal={arXiv preprint arXiv:2407.15831},
  year={2024}
}
```

## Troubleshooting

#### 1. Instruction template for MTEB benchmarks
For the MTEB retrieval, STS, and summarization sub-tasks, please use the instruction prefix templates in [instructions.json](https://huggingface.co/nvidia/NV-Embed-v2/blob/main/instructions.json). For classification, clustering, and reranking, please use the instructions provided in Table 7 of the [NV-Embed paper](https://arxiv.org/pdf/2405.17428).
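As a concrete illustration, a query prefix for an MTEB task could be assembled along the lines below. This is a sketch only: the assumed schema of instructions.json (a flat mapping from task name to instruction text) and the task name `"NQ"` are placeholders, so check them against the actual file before relying on this.

```python
import json

# Assumption: instructions.json maps an MTEB task name to its instruction text.
# Verify the real schema of the downloaded file before relying on this sketch.
with open("instructions.json") as f:
    task_to_instruction = json.load(f)

def build_query_prefix(task_name: str) -> str:
    instruction = task_to_instruction[task_name]
    # Same "Instruct: ...\nQuery: " pattern as in the usage examples above.
    return "Instruct: " + instruction + "\nQuery: "

# Hypothetical task name; replace with the MTEB task you are evaluating.
print(build_query_prefix("NQ"))
```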
#### 2. Required Packages
If you have trouble, try installing the Python packages pinned below:

```bash
pip uninstall -y transformer-engine
pip install torch==2.2.0
pip install transformers==4.42.4
pip install flash-attn==2.2.0
pip install sentence-transformers==2.7.0
```

#### 3. How to enable Multi-GPU (Note: this applies to the HuggingFace Transformers usage)
```python
from transformers import AutoModel
from torch.nn import DataParallel

embedding_model = AutoModel.from_pretrained("nvidia/NV-Embed-v2")
for module_key, module in embedding_model._modules.items():
    embedding_model._modules[module_key] = DataParallel(module)
```

#### 4. Fixing "nvidia/NV-Embed-v2 is not the path to a directory containing a file named config.json"
Switch to your local model path, open config.json, and replace the value of **"_name_or_path"** with your local model path.

#### 5. Access to model nvidia/NV-Embed-v2 is restricted. You must be authenticated to access it
Use your Hugging Face access [token](https://huggingface.co/settings/tokens) to execute *"huggingface-cli login"*.

#### 6. How to resolve a slight mismatch in Sentence-Transformers results
A slight mismatch in the Sentence-Transformers implementation is caused by a discrepancy in the calculation of the instruction prefix length within the Sentence-Transformers package. To fix this issue, you need to build the Sentence-Transformers package from source, making the necessary modification in this [line](https://github.com/UKPLab/sentence-transformers/blob/v2.7-release/sentence_transformers/SentenceTransformer.py#L353) as below.

```bash
git clone https://github.com/UKPLab/sentence-transformers.git
cd sentence-transformers
git checkout v2.7-release
# Modify L353 in SentenceTransformer.py to 'extra_features["prompt_length"] = tokenized_prompt["input_ids"].shape[-1]'.
pip install -e .
```
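After rebuilding the package, one way to sanity-check the fix is to compare the score matrices produced by the two usage paths above on the same queries and passages. This is a sketch under stated assumptions: the names `scores_hf` and `scores_st` are chosen here for illustration and stand for the `scores` results of the HuggingFace Transformers and Sentence-Transformers snippets, respectively.

```python
import numpy as np

# Small numerical differences are expected; after the patch the two paths
# should agree closely on the same inputs.
scores_hf = np.asarray(scores_hf)  # from the HuggingFace Transformers example (assumed saved)
scores_st = np.asarray(scores_st)  # from the Sentence-Transformers example (assumed saved)

print("max abs difference:", np.max(np.abs(scores_hf - scores_st)))
assert np.allclose(scores_hf, scores_st, atol=1e-1), "scores still diverge; re-check the patch"
```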
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 92.7402}, {"type": "accuracy_stderr", "value": 1.020764595781027}, {"type": "ap", "value": 44.38594756333084}, {"type": "ap_stderr", "value": 1.817150701258273}, {"type": "f1", "value": 79.95699280019547}, {"type": "f1_stderr", "value": 1.334582498702029}, {"type": "main_score", "value": 92.7402}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 80.86870401810978}, {"type": "accuracy_stderr", "value": 0.22688467782004712}, {"type": "f1", "value": 81.1829040745744}, {"type": "f1_stderr", "value": 0.19774920574849694}, {"type": "main_score", "value": 80.86870401810978}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "main_score", "value": 64.82048869927482}, {"type": "v_measure", "value": 64.82048869927482}, {"type": "v_measure_std", "value": 0.9170394252450564}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.44251057996067}, {"type": "cos_sim_accuracy_threshold", "value": 70.2150285243988}, {"type": "cos_sim_ap", "value": 81.11422351199913}, {"type": "cos_sim_f1", "value": 73.71062868615887}, {"type": "cos_sim_f1_threshold", "value": 66.507488489151}, {"type": "cos_sim_precision", "value": 70.2799712849964}, {"type": "cos_sim_recall", "value": 77.4934036939314}, {"type": "dot_accuracy", "value": 88.44251057996067}, {"type": "dot_accuracy_threshold", "value": 70.2150285243988}, {"type": "dot_ap", "value": 81.11420529068658}, {"type": "dot_f1", "value": 73.71062868615887}, {"type": "dot_f1_threshold", "value": 66.50749444961548}, {"type": "dot_precision", "value": 70.2799712849964}, {"type": "dot_recall", "value": 77.4934036939314}, {"type": "euclidean_accuracy", "value": 88.44251057996067}, {"type": "euclidean_accuracy_threshold", "value": 77.18156576156616}, {"type": "euclidean_ap", "value": 81.11422421732487}, {"type": "euclidean_f1", "value": 73.71062868615887}, {"type": "euclidean_f1_threshold", "value": 81.84436559677124}, {"type": "euclidean_precision", "value": 70.2799712849964}, {"type": "euclidean_recall", "value": 77.4934036939314}, {"type": "manhattan_accuracy", "value": 88.26369434344639}, {"type": "manhattan_accuracy_threshold", "value": 3837.067413330078}, {"type": "manhattan_ap", "value": 80.81442360477725}, {"type": "manhattan_f1", "value": 73.39883099117024}, {"type": "manhattan_f1_threshold", "value": 4098.833847045898}, {"type": "manhattan_precision", "value": 69.41896024464832}, {"type": "manhattan_recall", "value": 77.86279683377309}, {"type": "max_accuracy", "value": 88.44251057996067}, {"type": "max_ap", "value": 81.11422421732487}, {"type": "max_f1", "value": 73.71062868615887}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": 
"mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 90.03182365040556}, {"type": "cos_sim_accuracy_threshold", "value": 64.46443796157837}, {"type": "cos_sim_ap", "value": 87.86649113691112}, {"type": "cos_sim_f1", "value": 80.45644844577821}, {"type": "cos_sim_f1_threshold", "value": 61.40774488449097}, {"type": "cos_sim_precision", "value": 77.54052702992216}, {"type": "cos_sim_recall", "value": 83.60024638127503}, {"type": "dot_accuracy", "value": 90.03182365040556}, {"type": "dot_accuracy_threshold", "value": 64.46444988250732}, {"type": "dot_ap", "value": 87.86649011954319}, {"type": "dot_f1", "value": 80.45644844577821}, {"type": "dot_f1_threshold", "value": 61.407750844955444}, {"type": "dot_precision", "value": 77.54052702992216}, {"type": "dot_recall", "value": 83.60024638127503}, {"type": "euclidean_accuracy", "value": 90.03182365040556}, {"type": "euclidean_accuracy_threshold", "value": 84.30368900299072}, {"type": "euclidean_ap", "value": 87.86649114275045}, {"type": "euclidean_f1", "value": 80.45644844577821}, {"type": "euclidean_f1_threshold", "value": 87.8547191619873}, {"type": "euclidean_precision", "value": 77.54052702992216}, {"type": "euclidean_recall", "value": 83.60024638127503}, {"type": "manhattan_accuracy", "value": 89.99883572010712}, {"type": "manhattan_accuracy_threshold", "value": 4206.838607788086}, {"type": "manhattan_ap", "value": 87.8600826607838}, {"type": "manhattan_f1", "value": 80.44054508120217}, {"type": "manhattan_f1_threshold", "value": 4372.755432128906}, {"type": "manhattan_precision", "value": 78.08219178082192}, {"type": "manhattan_recall", "value": 82.94579611949491}, {"type": "max_accuracy", "value": 90.03182365040556}, {"type": "max_ap", "value": 87.86649114275045}, {"type": "max_f1", "value": 80.45644844577821}]}]}]}
dataset
null
532
RichardErkhov/alonzogarbanzo_-_Bloom-1b7-creative-writing-IT-baseline-8bits
RichardErkhov
null
[ "safetensors", "bloom", "8-bit", "bitsandbytes", "region:us" ]
2025-03-03T21:47:27Z
2025-03-03T21:48:56+00:00
5
0
---
{}
---

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Bloom-1b7-creative-writing-IT-baseline - bnb 8bits
- Model creator: https://huggingface.co/alonzogarbanzo/
- Original model: https://huggingface.co/alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline/

Original model description:
---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-1b7
tags:
- generated_from_trainer
model-index:
- name: Bloom-1b7-creative-writing-IT
  results: []
---

# Bloom-1b7-creative-writing-IT

This model is a fine-tuned version of [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) on a creative writing - short story dataset: https://huggingface.co/datasets/adambjorn/UnrelatedForgettingOverhead/viewer/creative

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Training and evaluation data here: https://huggingface.co/datasets/adambjorn/UnrelatedForgettingOverhead/viewer/creative

## Training procedure

The model was instruction tuned on the dataset in the following way. Given the set of prompts:

```python
prompts = [
    "Write a creative short story based on the following title:",
    "Here is a title for a story. Craft a short narrative around it:",
    "Using the title given, develop a short story:",
    "Imagine a short story that starts with this title:",
    "Create a brief story with the following title:"
]
```

each training example is generated by concatenating one of the prompts with the 'title' and 'selftext' in the following way:

```python
concatenated_texts = [random.choice(prompts) + " " + title + "</s>" + "Story: " + selftext for title, selftext in zip(titles, selftexts)]
```

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

Final reported loss: {'loss': 0.0135, 'grad_norm': 0.6041152477264404, 'learning_rate': 7.446808510638299e-07, 'epoch': 9.89}

Average over tuning: {'train_runtime': 1111.4187, 'train_samples_per_second': 1.71, 'train_steps_per_second': 0.423, 'train_loss': 0.4682149670225509, 'epoch': 9.89}

### Framework versions

- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
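To make the instruction-tuning recipe above concrete, here is a minimal sketch of how the concatenated training texts could be rebuilt end to end. It is not part of the original card: the `load_dataset` call, the config name `creative`, and the `train` split are assumptions inferred from the card's dataset link, while the prompt list, the `title`/`selftext` column names, and the concatenation pattern are copied from the snippets above.

```python
# Hedged sketch, not the original training script. Dataset path/config/split
# are assumptions taken from the card's dataset viewer link.
import random

from datasets import load_dataset

prompts = [
    "Write a creative short story based on the following title:",
    "Here is a title for a story. Craft a short narrative around it:",
    "Using the title given, develop a short story:",
    "Imagine a short story that starts with this title:",
    "Create a brief story with the following title:",
]

# Assumed: the "creative" config and a "train" split exist for this dataset.
ds = load_dataset("adambjorn/UnrelatedForgettingOverhead", "creative", split="train")

titles = ds["title"]        # column names as used in the card's snippet
selftexts = ds["selftext"]

# Same concatenation pattern as in the card: prompt + title + "</s>" + "Story: " + body.
concatenated_texts = [
    random.choice(prompts) + " " + title + "</s>" + "Story: " + selftext
    for title, selftext in zip(titles, selftexts)
]

print(concatenated_texts[0][:200])  # quick sanity check of one training example
```

Each string in `concatenated_texts` would then be tokenized and passed to the trainer with the hyperparameters listed above; that step is not shown in the card, so it is omitted here rather than guessed.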
[ "CRAFT" ]
Non_BioNLP
{}
dataset
null
533
RichardErkhov/ayjays132_-_CustomGPT2Conversational-4bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
2024-05-03T23:04:19Z
2024-05-03T23:04:50+00:00
4
0
---
{}
---

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

CustomGPT2Conversational - bnb 4bits
- Model creator: https://huggingface.co/ayjays132/
- Original model: https://huggingface.co/ayjays132/CustomGPT2Conversational/

Original model description:
---
_name_or_path: CustomGPT2ConversationalModel
torch_dtype: float32
transformers_version: 4.37.2
language: en
license: apache-2.0
metrics:
- perplexity
- accuracy
widget:
- text: Write a story about a time-traveling detective in Elizabethan England.
- text: Write a poem in the style of Pablo Neruda about the night sky.
- text: Write a press release about a new technology for real-time language translation.
- text: Write a conversation between a human and an alien meeting in space.
- text: Write an essay about the impact of social media on society from a 22nd-century perspective.
- text: Write a speech for the first human on Mars to address the global audience on Earth.
- text: Write a story about children discovering a hidden city with ancient magic.
- text: Write a letter from a Renaissance artist to a modern art student about creativity and excellence.
- text: Write a recipe for a futuristic dish for a space colony with exotic ingredients and innovative cooking methods.
---

🌈✨ **Welcoming the Dawn of Dialogue: CustomGPT2Conversational** 🌟🗨️

Prepare to embark on an odyssey through the landscapes of digital discourse with our meticulously crafted model, **CustomGPT2Conversational**. Born from the synthesis of advanced AI and the art of conversation, this model is your gateway to explorations in the realm of limitless dialogues. Let's dive into what makes **CustomGPT2Conversational** not just a model, but a revolution in conversational AI.

🎭 **Distinctive Elements**:
- 💫 **Engagement Unleashed**: Craft conversations that flow with unparalleled grace, tailored to keep the discourse vibrant and context-aware.
- 📘 **Conversational Mastery**: Refined through the crucible of nuanced dialogues, it stands as a beacon of natural interaction.
- ⚡ **Technological Zenith**: Harnessing avant-garde AI, it stands at the frontier of conversational excellence, setting new benchmarks.

🛠️ **Architectural Marvels**:
- 🏛 **Blueprints of Ingenuity**: At its core, the GPT2LMHeadModel architecture, endowed with 24 transformative layers, a hidden chamber of 2048 units, and the vigil of 16 attention sentinels.
- 🌀 **The Dance of Dropouts**: A ballet of balance with a 0.1 leitmotif for attention, embedding, and residuals, ensuring each step is perfectly poised.
- 🎶 **Harmony of Activation**: The melody of GELU (Gaussian Error Linear Unit) resonates through its structure, enabling a fluid symphony of responses.

🌐 **Configurations of Curiosity**:
- 📜 **Script of Specificity**: Tailored task parameters set the stage for a performance of early cessation, nuanced penalties, and the strategic beam search, elevating conversational craft.
- 🕰️ **Adaptability in Time**: A chameleon in the digital domain, it adjusts its hues to match the evolving tapestry of dialogue demands.
- 🌍 **A Polyglot’s Promise**: While it begins its journey with the eloquence of English, it harbors the potential for multilingual dialogue, embracing the global tapestry of tongues.

🏷️ **Identification**:
- 🌌 **Through the Conversational Cosmos**: Tagged with the essence of conversation, it invites you to navigate the nebulae of dialogue.
- 🗂️ **Data Delve**: The archives of the LDJnr/Capybara dataset serve as its crucible, where it was forged in the flames of rich dialogues.
- 📖 **Library Lore**: Anchored in the hallowed halls of the Transformers library, version 4.37.2, it is a testament to its lineage.
- 🔐 **Scrolls of Sharing**: Under the banner of Apache-2.0, it extends an open invitation to innovate, adapt, and share.

Embark with **CustomGPT2Conversational** as your compass in the quest for conversational enlightenment. Witness the dawn of a new era in dialogue, where every interaction is an adventure, every conversation a journey. 🚀🌌✨

---

🌟💪 **Example Output: Unleashing Your Potential** 💪🌟

---

Write a motivational speech encouraging people to pursue their passions and overcome obstacles.

And one more for the fun of it! Here are some inspirational quotes I have been asked about, so here they go… :) :)'s words :) The great thing is that all you need now is your determination. Whether this be by choosing not-yet chosen interests (I don't mean reading books), or if pursuing them leads us into becoming truly fulfilled with life...it's up in our hands NOW AND ONLY THEN DO YOU NEED THE STRENGTH TO LEARN HOW - WHY THIS IS RIGHT FOR YOUR LIFE!' As long as we continue onward towards these goals – through perseverance rather than striving after success every minute of each day — WE WILL NEVER BE PUSHED DOWN IN OUR PATH, OR GIVE UP ON WHAT WAS EARLY TAKEN from HUMANS … We must strive at ANYTHING; even what seems difficult today may help shape who "is" tomorrow." It might seem like an obvious statement but no matter how strong someone thinks themselves can get, chances were there wasn\'t much left when things got tough back then.. No wonder why such dreams always end before seeing out reality.
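For readers who prefer the architecture description above in plain numbers, here is a minimal sketch of a `GPT2Config` that mirrors it (24 layers, 2048 hidden units, 16 attention heads, 0.1 dropout for attention/embedding/residuals, GELU activation). This is an illustration only, not the author's released configuration; the exact GELU variant and all unmentioned settings are assumptions left at library defaults.

```python
# Hedged sketch: a GPT2Config shaped like the architecture the card describes.
# Values come from the card's prose; everything else is a library default.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    n_layer=24,        # "24 transformative layers"
    n_embd=2048,       # "a hidden chamber of 2048 units"
    n_head=16,         # "16 attention sentinels"
    attn_pdrop=0.1,    # attention dropout
    embd_pdrop=0.1,    # embedding dropout
    resid_pdrop=0.1,   # residual dropout
    activation_function="gelu_new",  # GELU; exact variant is an assumption
)

model = GPT2LMHeadModel(config)  # randomly initialised, only for shape inspection
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")
```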
[ "CRAFT" ]
Non_BioNLP
{}
dataset
null
534
lightonai/modernbert-embed-large
lightonai
sentence-similarity
[ "sentence-transformers", "onnx", "safetensors", "modernbert", "feature-extraction", "sentence-similarity", "mteb", "transformers.js", "en", "arxiv:2402.01613", "arxiv:2412.13663", "base_model:answerdotai/ModernBERT-large", "base_model:quantized:answerdotai/ModernBERT-large", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-01-13T10:21:29Z
2025-01-14T10:02:51+00:00
3,361
20
--- base_model: - answerdotai/ModernBERT-large - lightonai/modernbert-embed-large-unsupervised language: - en license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - transformers.js model-index: - name: modernbert-embed-large results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: None config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.7910447761194 - type: ap value: 39.79562424828666 - type: f1 value: 70.69575548517653 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: None config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 94.19505000000001 - type: ap value: 91.75071069741077 - type: f1 value: 94.19151001437368 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: None config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.664 - type: f1 value: 46.932904638602466 - task: type: Retrieval dataset: name: MTEB ArguAna type: None config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 25.178 - type: map_at_10 value: 41.088 - type: map_at_100 value: 42.143 - type: map_at_1000 value: 42.152 - type: map_at_20 value: 41.946 - type: map_at_3 value: 36.048 - type: map_at_5 value: 38.619 - type: mrr_at_1 value: 25.533 - type: mrr_at_10 value: 41.238 - type: mrr_at_100 value: 42.293 - type: mrr_at_1000 value: 42.302 - type: mrr_at_20 value: 42.096000000000004 - type: mrr_at_3 value: 36.260999999999996 - type: mrr_at_5 value: 38.797 - type: ndcg_at_1 value: 25.178 - type: ndcg_at_10 value: 50.352 - type: ndcg_at_100 value: 54.583000000000006 - type: ndcg_at_1000 value: 54.797 - type: ndcg_at_20 value: 53.36 - type: ndcg_at_3 value: 39.781 - type: ndcg_at_5 value: 44.412 - type: precision_at_1 value: 25.178 - type: precision_at_10 value: 8.016 - type: precision_at_100 value: 0.98 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.591 - type: precision_at_3 value: 16.88 - type: precision_at_5 value: 12.376 - type: recall_at_1 value: 25.178 - type: recall_at_10 value: 80.156 - type: recall_at_100 value: 98.009 - type: recall_at_1000 value: 99.644 - type: recall_at_20 value: 91.821 - type: recall_at_3 value: 50.63999999999999 - type: recall_at_5 value: 61.878 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: None config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.800803622189214 - type: v_measures value: - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 
0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 
0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 
0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 
0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 
1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 
0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 
0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 
- 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 
0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - 0.45978238220905315 - 0.501480083181185 - 0.48967239140474045 - 0.4751957557116818 - 0.46928677237487587 - 0.47135861735124435 - 0.4795286266157441 - 0.48441035326165754 - 0.47945476864912945 - 0.45912059930502597 - 0.5592448526471332 - 0.5674112737806248 - 0.5567224389492952 - 0.5541118789802117 - 0.570514423105391 - 0.5629670037938863 - 0.5615893409655635 - 0.5625434649173611 - 0.5565761783630462 - 0.5623718557128333 - 0.5210204606034864 - 0.2859950794098042 - 0.45504510640487766 - 0.4047776074746812 - 0.3535102351915281 - 0.28472692335289046 - 0.307070020692249 - 0.24530323287003208 - 0.3021496005249739 - 1.0 - 0.2753077950744492 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: None config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 39.46617889287484 - type: v_measures value: - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 
0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 
0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 
0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 
0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 
0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - 0.3893033531591254 - 0.37759008636346686 - 0.4009935123702046 - 0.4178331058752503 - 0.3649433015889931 - 0.38654663347660045 - 0.3986450977154441 - 0.3968923489520449 - 0.40313313179256627 - 0.4126496238026695 - 0.45374176499552477 - 0.46161782893366204 - 0.4570977042014734 - 0.4657228049179058 - 0.4591935076221456 - 0.4598535119202477 - 0.4582830838756286 - 0.45749858873241683 - 0.4544639331450374 - 0.45549406822102056 - 0.4234000416463061 - 0.2369850950367345 - 0.32010658770443073 - 0.35615139000924473 - 0.2892879335706423 - 0.2131268051282916 - 0.2509721947237842 - 0.15542440069786495 - 0.24428237711980852 - 1.0 - 0.21328163949266216 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: None config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.213756562834234 - type: mrr value: 76.76493866244557 - task: type: STS dataset: name: MTEB BIOSSES type: None config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.90977143141957 - type: cos_sim_spearman value: 87.47729443431557 - type: euclidean_pearson value: 86.45663786393041 - type: euclidean_spearman value: 86.31461733951959 - type: manhattan_pearson value: 85.94280510342506 - type: manhattan_spearman value: 85.61158927235539 - task: type: Classification dataset: name: MTEB Banking77Classification type: None config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 86.17857142857144 - type: f1 value: 86.14192410600847 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: None config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.466770895318334 - type: v_measures value: - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 
0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 
0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 0.40844969151861044 - 0.3935056530915587 - 0.40922819860092013 - 0.3849133523751956 - 0.3914994239029642 - 0.3795692975652329 - 0.39567466406296875 - 0.39744518295653425 - 0.4016581709951989 - 0.38473345446264967 - 
    - 0.40844969151861044
    - 0.3935056530915587
    - 0.40922819860092013
    - 0.3849133523751956
    - 0.3914994239029642
    - 0.3795692975652329
    - 0.39567466406296875
    - 0.39744518295653425
    - 0.4016581709951989
    - 0.38473345446264967
- task:
    type: Clustering
  dataset:
    name: MTEB BiorxivClusteringS2S
    type: None
    config: default
    split: test
    revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
  metrics:
  - type: v_measure
    value: 34.668146108668715
  - type: v_measures
    value:
    - 0.34627871014268474
    - 0.35947142706082674
    - 0.35557599816049484
    - 0.3313213383920607
    - 0.33475049791707046
    - 0.3464858366916894
    - 0.34749918466307905
    - 0.3459299753204718
    - 0.35574882126513674
    - 0.3437528212533572
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackAndroidRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: f46a197baaae43b4f621051089b82a364682dfeb
  metrics:
  - type: map_at_10
    value: 45.882
  - type: mrr_at_10
    value: 51.134
  - type: ndcg_at_10
    value: 51.842999999999996
  - type: recall_at_100
    value: 84.867
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackEnglishRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
  metrics:
  - type: map_at_10
    value: 46.123
  - type: mrr_at_10
    value: 52.544000000000004
  - type: ndcg_at_10
    value: 52.235
  - type: recall_at_100
    value: 79.745
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackGamingRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 4885aa143210c98657558c04aaf3dc47cfb54340
  metrics:
  - type: map_at_10
    value: 56.53
  - type: mrr_at_10
    value: 59.819
  - type: ndcg_at_10
    value: 62.141999999999996
  - type: recall_at_100
    value: 90.602
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackGisRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 5003b3064772da1887988e05400cf3806fe491f2
  metrics:
  - type: map_at_10
    value: 35.865
  - type: mrr_at_10
    value: 37.828
  - type: ndcg_at_10
    value: 40.983999999999995
  - type: recall_at_100
    value: 78.171
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackMathematicaRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 90fceea13679c63fe563ded68f3b6f06e50061de
  metrics:
  - type: map_at_10
    value: 27.577
  - type: mrr_at_10
    value: 32.089
  - type: ndcg_at_10
    value: 33.343
  - type: recall_at_100
    value: 69.614
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackPhysicsRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
  metrics:
  - type: map_at_10
    value: 41.668
  - type: mrr_at_10
    value: 47.471999999999994
  - type: ndcg_at_10
    value: 47.63
  - type: recall_at_100
    value: 81.625
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackProgrammersRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
  metrics:
  - type: map_at_10
    value: 40.275
  - type: mrr_at_10
    value: 45.818
  - type: ndcg_at_10
    value: 46.658
  - type: recall_at_100
    value: 82.597
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackStatsRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
  metrics:
  - type: map_at_10
    value: 33.311
  - type: mrr_at_10
    value: 36.132999999999996
  - type: ndcg_at_10
    value: 37.882
  - type: recall_at_100
    value: 69.625
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackTexRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 46989137a86843e03a6195de44b09deda022eec7
  metrics:
  - type: map_at_10
    value: 26.450000000000003
  - type: mrr_at_10
    value: 30.326999999999998
  - type: ndcg_at_10
    value: 31.418000000000003
  - type: recall_at_100
    value: 66.964
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackUnixRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
  metrics:
  - type: map_at_10
    value: 40.217000000000006
  - type: mrr_at_10
    value: 44.655
  - type: ndcg_at_10
    value: 45.509
  - type: recall_at_100
    value: 79.196
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackWebmastersRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 160c094312a0e1facb97e55eeddb698c0abe3571
  metrics:
  - type: map_at_10
    value: 36.897000000000006
  - type: mrr_at_10
    value: 41.817
  - type: ndcg_at_10
    value: 43.134
  - type: recall_at_100
    value: 80.63900000000001
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackWordpressRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
  metrics:
  - type: map_at_10
    value: 31.252000000000002
  - type: mrr_at_10
    value: 33.088
  - type: ndcg_at_10
    value: 35.995
  - type: recall_at_100
    value: 72.004
- task:
    type: Retrieval
  dataset:
    name: MTEB ClimateFEVER
    type: None
    config: default
    split: test
    revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
  metrics:
  - type: map_at_10
    value: 28.988999999999997
  - type: mrr_at_10
    value: 50.736000000000004
  - type: ndcg_at_10
    value: 39.139
  - type: recall_at_100
    value: 69.037
- task:
    type: Retrieval
  dataset:
    name: MTEB DBPedia
    type: None
    config: default
    split: test
    revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
  metrics:
  - type: map_at_10
    value: 20.714
  - type: mrr_at_10
    value: 75.83
  - type: ndcg_at_10
    value: 43.071
  - type: recall_at_100
    value: 52.443
- task:
    type: Classification
  dataset:
    name: MTEB EmotionClassification
    type: None
    config: default
    split: test
    revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
  metrics:
  - type: accuracy
    value: 49.625
  - type: f1
    value: 44.48944228050152
- task:
    type: Retrieval
  dataset:
    name: MTEB FEVER
    type: None
    config: default
    split: test
    revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
  metrics:
  - type: map_at_10
    value: 85.175
  - type: mrr_at_10
    value: 90.02
  - type: ndcg_at_10
    value: 88.75999999999999
  - type: recall_at_100
    value: 97.167
- task:
    type: Retrieval
  dataset:
    name: MTEB FiQA2018
    type: None
    config: default
    split: test
    revision: 27a168819829fe9bcd655c2df245fb19452e8e06
  metrics:
  - type: map_at_10
    value: 36.394
  - type: mrr_at_10
    value: 53.369
  - type: ndcg_at_10
    value: 44.425
  - type: recall_at_100
    value: 75.32600000000001
- task:
    type: Retrieval
  dataset:
    name: MTEB HotpotQA
    type: None
    config: default
    split: test
    revision: ab518f4d6fcca38d87c25209f94beba119d02014
  metrics:
  - type: map_at_10
    value: 59.602999999999994
  - type: mrr_at_10
    value: 88.713
  - type: ndcg_at_10
    value: 68.67
  - type: recall_at_100
    value: 78.494
- task:
    type: Classification
  dataset:
    name: MTEB ImdbClassification
    type: None
    config: default
    split: test
    revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
  metrics:
  - type: accuracy
    value: 89.0388
  - type: ap
    value: 84.768407855227
  - type: f1
    value: 89.00848365810504
- task:
    type: Retrieval
  dataset:
    name: MTEB MSMARCO
    type: None
    config: default
    split: dev
    revision: c5a29a104738b98a9e76336939199e264163d4a0
  metrics:
  - type: map_at_10
    value: 35.476
  - type: mrr_at_10
    value: 35.994
  - type: ndcg_at_10
    value: 42.548
  - type: recall_at_100
    value: 89.836
- task:
    type: Classification
  dataset:
    name: MTEB MTOPDomainClassification (en)
    type: None
    config: en
    split: test
    revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
  metrics:
  - type: accuracy
    value: 93.26493388052896
  - type: f1
    value: 93.09322316606121
- task:
    type: Classification
  dataset:
    name: MTEB MTOPIntentClassification (en)
    type: None
    config: en
    split: test
    revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
  metrics:
  - type: accuracy
    value: 79.26356589147285
  - type: f1
    value: 62.91191113045691
- task:
    type: Classification
  dataset:
    name: MTEB MassiveIntentClassification (en)
    type: None
    config: en
    split: test
    revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
  metrics:
  - type: accuracy
    value: 75.4034969737727
  - type: f1
    value: 73.26712703676112
- task:
    type: Classification
  dataset:
    name: MTEB MassiveScenarioClassification (en)
    type: None
    config: en
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 78.55749831876263
  - type: f1
    value: 78.59077417507389
- task:
    type: Clustering
  dataset:
    name: MTEB MedrxivClusteringP2P
    type: None
    config: default
    split: test
    revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
  metrics:
  - type: v_measure
    value: 34.39782367001404
  - type: v_measures
    value:
    - 0.32448893901437725
    - 0.3361996312847464
    - 0.33908138638635865
    - 0.3271187384761059
    - 0.33377012095364167
    - 0.36905559994096754
    - 0.34390086433027045
    - 0.360820016295285
    - 0.3654168102809745
    - 0.33993026003867693
  - task:
      type: Clustering
    dataset:
      name: MTEB MedrxivClusteringS2S
      type: None
      config: default
      split: test
      revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
    metrics:
    - type: v_measure
      value: 31.630415762081864
    - type: v_measures
      value:
      - 0.3036701988106334
      - 0.2933155184673828
      - 0.3026750733434484
      - 0.3058243831740207
      - 0.31157295468997015
      - 0.3365172382225082
      - 0.32195157464369284
      - 0.332537268880845
      - 0.33592713523868506
      - 0.31905023073699995
  - task:
      type: Reranking
    dataset:
      name: MTEB MindSmallReranking
      type: None
      config: default
      split: test
      revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
    metrics:
    - type: map
      value: 30.989924085485676
    - type: mrr
      value: 31.985114880107695
  - task:
      type: Retrieval
    dataset:
      name: MTEB NFCorpus
      type: None
      config: default
      split: test
      revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
    metrics:
    - type: map_at_1
      value: 5.771
    - type: map_at_10
      value: 13.008000000000001
    - type: map_at_100
      value: 16.125999999999998
    - type: map_at_1000
      value: 17.482
    - type: map_at_20
      value: 14.324
    - type: map_at_3
      value: 9.69
    - type: map_at_5
      value: 11.174000000000001
    - type: mrr_at_1
      value: 45.201
    - type: mrr_at_10
      value: 53.989
    - type: mrr_at_100
      value: 54.50899999999999
    - type: mrr_at_1000
      value: 54.551
    - type: mrr_at_20
      value: 54.247
    - type: mrr_at_3
      value: 52.373999999999995
    - type: mrr_at_5
      value: 53.225
    - type: ndcg_at_1
      value: 43.808
    - type: ndcg_at_10
      value: 34.757
    - type: ndcg_at_100
      value: 31.174000000000003
    - type: ndcg_at_1000
      value: 39.607
    - type: ndcg_at_20
      value: 32.151999999999994
    - type: ndcg_at_3
      value: 40.458
    - type: ndcg_at_5
      value: 38.06
    - type: precision_at_1
      value: 45.201
    - type: precision_at_10
      value: 25.728
    - type: precision_at_100
      value: 7.82
    - type: precision_at_1000
      value: 2.032
    - type: precision_at_20
      value: 18.793000000000003
    - type: precision_at_3
      value: 38.080000000000005
    - type: precision_at_5
      value: 32.879000000000005
    - type: recall_at_1
      value: 5.771
    - type: recall_at_10
      value: 16.567
    - type: recall_at_100
      value: 30.447999999999997
    - type: recall_at_1000
      value: 60.941
    - type: recall_at_20
      value: 20.092
    - type: recall_at_3
      value: 10.928
    - type: recall_at_5
      value: 13.235
  - task:
      type: Retrieval
    dataset:
      name: MTEB NQ
      type: None
      config: default
      split: test
      revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
    metrics:
    - type: map_at_1
      value: 40.716
    - type: map_at_10
      value: 56.599999999999994
    - type: map_at_100
      value: 57.389
    - type: map_at_1000
      value: 57.408
    - type: map_at_20
      value: 57.154
    - type: map_at_3
      value: 52.577
    - type: map_at_5
      value: 55.076
    - type: mrr_at_1
      value: 45.655
    - type: mrr_at_10
      value: 59.014
    - type: mrr_at_100
      value: 59.568
    - type: mrr_at_1000
      value: 59.580999999999996
    - type: mrr_at_20
      value: 59.41499999999999
    - type: mrr_at_3
      value: 55.88999999999999
    - type: mrr_at_5
      value: 57.879999999999995
    - type: ndcg_at_1
      value: 45.626
    - type: ndcg_at_10
      value: 63.778
    - type: ndcg_at_100
      value: 66.905
    - type: ndcg_at_1000
      value: 67.322
    - type: ndcg_at_20
      value: 65.521
    - type: ndcg_at_3
      value: 56.494
    - type: ndcg_at_5
      value: 60.553999999999995
    - type: precision_at_1
      value: 45.626
    - type: precision_at_10
      value: 9.942
    - type: precision_at_100
      value: 1.169
    - type: precision_at_1000
      value: 0.121
    - type: precision_at_20
      value: 5.390000000000001
    - type: precision_at_3
      value: 25.135
    - type: precision_at_5
      value: 17.451
    - type: recall_at_1
      value: 40.716
    - type: recall_at_10
      value: 82.998
    - type: recall_at_100
      value: 96.236
    - type: recall_at_1000
      value: 99.31400000000001
    - type: recall_at_20
      value: 89.402
    - type: recall_at_3
      value: 64.47699999999999
    - type: recall_at_5
      value: 73.774
  - task:
      type: Retrieval
    dataset:
      name: MTEB QuoraRetrieval
      type: None
      config: default
      split: test
      revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
    metrics:
    - type: map_at_1
      value: 71.679
    - type: map_at_10
      value: 85.63
    - type: map_at_100
      value: 86.24000000000001
    - type: map_at_1000
      value: 86.25500000000001
    - type: map_at_20
      value: 86.03
    - type: map_at_3
      value: 82.712
    - type: map_at_5
      value: 84.59400000000001
    - type: mrr_at_1
      value: 82.58
    - type: mrr_at_10
      value: 88.459
    - type: mrr_at_100
      value: 88.544
    - type: mrr_at_1000
      value: 88.545
    - type: mrr_at_20
      value: 88.521
    - type: mrr_at_3
      value: 87.548
    - type: mrr_at_5
      value: 88.19
    - type: ndcg_at_1
      value: 82.57
    - type: ndcg_at_10
      value: 89.205
    - type: ndcg_at_100
      value: 90.316
    - type: ndcg_at_1000
      value: 90.4
    - type: ndcg_at_20
      value: 89.802
    - type: ndcg_at_3
      value: 86.5
    - type: ndcg_at_5
      value: 88.06
    - type: precision_at_1
      value: 82.57
    - type: precision_at_10
      value: 13.511000000000001
    - type: precision_at_100
      value: 1.532
    - type: precision_at_1000
      value: 0.157
    - type: precision_at_20
      value: 7.1499999999999995
    - type: precision_at_3
      value: 37.82
    - type: precision_at_5
      value: 24.892
    - type: recall_at_1
      value: 71.679
    - type: recall_at_10
      value: 95.926
    - type: recall_at_100
      value: 99.653
    - type: recall_at_1000
      value: 99.99
    - type: recall_at_20
      value: 97.81
    - type: recall_at_3
      value: 88.124
    - type: recall_at_5
      value: 92.535
  - task:
      type: Clustering
    dataset:
      name: MTEB RedditClustering
      type: None
      config: default
      split: test
      revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
    metrics:
    - type: v_measure
      value: 58.980204279295776
    - type: v_measures
      value:
      - 0.6451280716471475
      - 0.645063311327467
      - 0.5315438986570028
      - 0.5664946021472431
      - 0.5738903466889544
      - 0.5276869089101741
      - 0.5904189978037212
      - 0.5603608879042441
      - 0.5568378389036701
      - 0.5726233719767458
      - 0.5477807586251173
      - 0.5827708688105891
      - 0.6065873110215666
      - 0.6036471736485209
      - 0.6912543733590332
      - 0.5432313459217541
      - 0.6228580641529852
      - 0.6752678197786052
      - 0.5716679708729834
      - 0.5654059124001324
      - 0.5454125044774013
      - 0.5704289785620336
      - 0.7083445261384431
      - 0.5977444086270381
      - 0.54260081746137
0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 
0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 
0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 
0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 
0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 
0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 
0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 
0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 0.5654059124001324 - 0.5454125044774013 - 0.5704289785620336 - 0.7083445261384431 - 0.5977444086270381 - 0.54260081746137 - 0.6451280716471475 - 0.645063311327467 - 0.5315438986570028 - 0.5664946021472431 - 0.5738903466889544 - 0.5276869089101741 - 0.5904189978037212 - 0.5603608879042441 - 0.5568378389036701 - 0.5726233719767458 - 0.5477807586251173 - 0.5827708688105891 - 0.6065873110215666 - 0.6036471736485209 - 0.6912543733590332 - 0.5432313459217541 - 0.6228580641529852 - 0.6752678197786052 - 0.5716679708729834 - 
- task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: None config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: v_measure value: 64.68385650734866 - type: v_measures value: - 0.6743650530639286 - 0.7047206687156294 - 0.6557778331932691 - 0.4282825632651972 - 0.7434812486386112 - 0.6326865724662851 - 0.4058629298732522 - 0.7451456136425593 - 0.715316547891375 - 0.7627466199847608
- task: type: Retrieval dataset: name: MTEB SCIDOCS type: None config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: map_at_1 value: 4.1930000000000005 - type: map_at_10 value: 10.993 - type: map_at_100 value: 12.821 - type: map_at_1000 value: 13.094 - type: map_at_20 value: 11.899999999999999 - type: map_at_3 value: 7.753 - type: map_at_5 value: 9.479 - type: mrr_at_1 value: 20.7 - type: mrr_at_10 value: 31.776 - type: mrr_at_100 value: 32.863 - type: mrr_at_1000 value: 32.921 - type: mrr_at_20 value: 32.374 - type: mrr_at_3 value: 28.499999999999996 - type: mrr_at_5 value: 30.464999999999996 - type: ndcg_at_1 value: 20.7 - type: ndcg_at_10 value: 18.602 - type: ndcg_at_100 value: 26.063 - type: ndcg_at_1000 value: 30.988 - type: ndcg_at_20 value: 21.124000000000002 - type: ndcg_at_3 value: 17.538999999999998 - type: ndcg_at_5 value: 15.604999999999999 - type: precision_at_1 value: 20.7 - type: precision_at_10 value: 9.69 - type: precision_at_100 value: 2.051 - type: precision_at_1000 value: 0.32299999999999995 - type: precision_at_20 value: 6.3 - type: precision_at_3 value: 16.567 - type: precision_at_5 value: 13.96 - type: recall_at_1 value: 4.1930000000000005 - type: recall_at_10 value: 19.618 - type: recall_at_100 value: 41.643 - type: recall_at_1000 value: 65.693 - type: recall_at_20 value: 25.562 - type: recall_at_3 value: 10.062999999999999 - type: recall_at_5 value: 14.127999999999998 - task: type: STS dataset: name: MTEB SICK-R type: None config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cos_sim_pearson value: 83.46613174654865 - type: cos_sim_spearman value: 80.3049357832415 - type: euclidean_pearson value: 81.26631332583317 - type: euclidean_spearman value: 80.3154745166346 - type: manhattan_pearson value: 81.14703159845031 - type: manhattan_spearman value: 80.20912001232311 - task: type: STS dataset: name: MTEB STS12 type: None config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.54049067032975 - type: cos_sim_spearman value: 80.96545866938635 - type: euclidean_pearson value: 83.96265705630466 - type: euclidean_spearman value: 79.93146623957664 - type: manhattan_pearson value: 83.90680327172007 - type: manhattan_spearman value: 79.9387741861374 - task: type: STS dataset: name: MTEB STS13 type: None config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 86.88551701212096 - type: cos_sim_spearman
value: 87.86522961782607 - type: euclidean_pearson value: 87.36290945594213 - type: euclidean_spearman value: 87.83062393537139 - type: manhattan_pearson value: 87.32544594269082 - type: manhattan_spearman value: 87.81556963071229 - task: type: STS dataset: name: MTEB STS14 type: None config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.30880458174929 - type: cos_sim_spearman value: 83.80166079353091 - type: euclidean_pearson value: 85.32128873266257 - type: euclidean_spearman value: 83.86251092262333 - type: manhattan_pearson value: 85.2712567451151 - type: manhattan_spearman value: 83.80950203378747 - task: type: STS dataset: name: MTEB STS15 type: None config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.26254668067915 - type: cos_sim_spearman value: 88.58702965856746 - type: euclidean_pearson value: 87.9969808017743 - type: euclidean_spearman value: 88.48082129802832 - type: manhattan_pearson value: 88.005385920726 - type: manhattan_spearman value: 88.48824252319064 - task: type: STS dataset: name: MTEB STS16 type: None config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.9048844772477 - type: cos_sim_spearman value: 86.81864160521327 - type: euclidean_pearson value: 86.28264402848413 - type: euclidean_spearman value: 86.78000025418731 - type: manhattan_pearson value: 86.2441248990138 - type: manhattan_spearman value: 86.75021285222047 - task: type: STS dataset: name: MTEB STS17 (en-en) type: None config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.489340312079 - type: cos_sim_spearman value: 87.98810146323362 - type: euclidean_pearson value: 89.17657344753519 - type: euclidean_spearman value: 88.96877394433339 - type: manhattan_pearson value: 89.17489837230771 - type: manhattan_spearman value: 88.87394331518345 - task: type: STS dataset: name: MTEB STS22 (en) type: None config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 63.020191114515576 - type: cos_sim_spearman value: 66.81821028889179 - type: euclidean_pearson value: 66.11102477309004 - type: euclidean_spearman value: 66.59000262767655 - type: manhattan_pearson value: 66.0319349852117 - type: manhattan_spearman value: 66.51366211903893 - task: type: STS dataset: name: MTEB STSBenchmark type: None config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.05763458617234 - type: cos_sim_spearman value: 87.40353901525121 - type: euclidean_pearson value: 87.43632331678887 - type: euclidean_spearman value: 87.58631222421829 - type: manhattan_pearson value: 87.40408795218912 - type: manhattan_spearman value: 87.55530395433567 - task: type: Reranking dataset: name: MTEB SciDocsRR type: None config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 83.40728647106346 - type: mrr value: 95.39606725881237 - task: type: Retrieval dataset: name: MTEB SciFact type: None config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 55.344 - type: map_at_10 value: 66.467 - type: map_at_100 value: 66.841 - type: map_at_1000 value: 66.86800000000001 - type: map_at_20 value: 66.728 - type: map_at_3 value: 62.888 - type: map_at_5 value: 65.10000000000001 - 
type: mrr_at_1 value: 58.333 - type: mrr_at_10 value: 67.471 - type: mrr_at_100 value: 67.75 - type: mrr_at_1000 value: 67.778 - type: mrr_at_20 value: 67.649 - type: mrr_at_3 value: 64.72200000000001 - type: mrr_at_5 value: 66.539 - type: ndcg_at_1 value: 58.333 - type: ndcg_at_10 value: 71.707 - type: ndcg_at_100 value: 73.301 - type: ndcg_at_1000 value: 74.053 - type: ndcg_at_20 value: 72.482 - type: ndcg_at_3 value: 65.561 - type: ndcg_at_5 value: 69.017 - type: precision_at_1 value: 58.333 - type: precision_at_10 value: 9.866999999999999 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_20 value: 5.1 - type: precision_at_3 value: 25.778000000000002 - type: precision_at_5 value: 17.533 - type: recall_at_1 value: 55.344 - type: recall_at_10 value: 86.76700000000001 - type: recall_at_100 value: 94.0 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 89.60000000000001 - type: recall_at_3 value: 70.406 - type: recall_at_5 value: 79.106 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: None config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.71089108910891 - type: cos_sim_ap value: 91.82444380538519 - type: cos_sim_f1 value: 85.34525583705911 - type: cos_sim_precision value: 84.79763079960513 - type: cos_sim_recall value: 85.9 - type: dot_accuracy value: 99.56039603960396 - type: dot_ap value: 84.71022538609428 - type: dot_f1 value: 76.18100447538538 - type: dot_precision value: 75.76656775469831 - type: dot_recall value: 76.6 - type: euclidean_accuracy value: 99.7 - type: euclidean_ap value: 91.68317023504792 - type: euclidean_f1 value: 84.65712876171682 - type: euclidean_precision value: 83.54430379746836 - type: euclidean_recall value: 85.8 - type: manhattan_accuracy value: 99.69900990099009 - type: manhattan_ap value: 91.5749511659937 - type: manhattan_f1 value: 84.6989141164857 - type: manhattan_precision value: 83.62573099415205 - type: manhattan_recall value: 85.8 - type: max_accuracy value: 99.71089108910891 - type: max_ap value: 91.82444380538519 - type: max_f1 value: 85.34525583705911 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: None config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 69.36504474977566 - type: v_measures value: - 0.7576989668086949 - 0.6941673973105086 - 0.5999199814586392 - 0.7009392860118014 - 0.6911146596911227 - 0.646390143058745 - 0.6442231726625358 - 0.7502350275519825 - 0.6869636659371134 - 0.6952444700037437 - 0.763079972153315 - 0.7984807201827683 - 0.8009864921302298 - 0.7022376752256222 - 0.6419780898814442 - 0.6918573402523567 - 0.660312536947917 - 0.6546073550319798 - 0.6686135632697091 - 0.6651974389583027 - 0.6923843269406074 - 0.6833654799568836 - 0.6633431494438509 - 0.7062277792579976 - 0.6816924973160465 -
task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: None config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.72911995025639 - type: v_measures value: - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 -
0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 
0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 
0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 
0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 
0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 
0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - 0.3304415914259876 - 0.34135448340648167 - 0.339706731244524 - 0.33071893172291084 - 0.3317995254408912 - 0.3738836068336685 - 0.35451479317768203 - 0.3555924499674302 - 0.3592757088728364 - 0.3556241729332264 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: None config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.975020393803675 - type: mrr value: 53.87404772515067 - task: type: Summarization dataset: name: MTEB SummEval type: None config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.205065693047615 - type: cos_sim_spearman value: 28.307951294409406 - type: dot_pearson value: 29.15581947828465 - type: dot_spearman value: 28.222470759389505 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: None config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: map_at_1 value: 0.249 - type: map_at_10 value: 2.243 - type: map_at_100 value: 13.791 - type: map_at_1000 value: 32.539 - type: map_at_20 value: 4.112 - type: map_at_3 value: 0.7060000000000001 - type: map_at_5 value: 1.1860000000000002 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 98.0 - type: mrr_at_100 value: 98.0 - type: mrr_at_1000 value: 98.0 - type: mrr_at_20 value: 98.0 - type: mrr_at_3 value: 98.0 - type: mrr_at_5 value: 98.0 - type: ndcg_at_1 value: 92.0 - type: ndcg_at_10 value: 86.083 - type: ndcg_at_100 value: 66.471 - type: ndcg_at_1000 value: 57.31699999999999 - type: ndcg_at_20 value: 82.783 - type: ndcg_at_3 value: 88.805 - type: ndcg_at_5 value: 88.96 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 91.2 - type: precision_at_100 value: 68.16 - type: precision_at_1000 value: 25.290000000000003 - type: precision_at_20 value: 86.9 - type: precision_at_3 value: 94.0 - type: precision_at_5 value: 94.39999999999999 - type: recall_at_1 value: 0.249 - type: recall_at_10 value: 2.3800000000000003 - type: recall_at_100 value: 16.45 - type: recall_at_1000 value: 53.1 - type: recall_at_20 value: 4.4670000000000005 - type: recall_at_3 value: 
0.734 - type: recall_at_5 value: 1.246 - task: type: Retrieval dataset: name: MTEB Touche2020 type: None config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 3.2520000000000002 - type: map_at_10 value: 11.805 - type: map_at_100 value: 18.749 - type: map_at_1000 value: 20.416999999999998 - type: map_at_20 value: 14.685 - type: map_at_3 value: 6.6739999999999995 - type: map_at_5 value: 8.863 - type: mrr_at_1 value: 42.857 - type: mrr_at_10 value: 57.635999999999996 - type: mrr_at_100 value: 58.034 - type: mrr_at_1000 value: 58.048 - type: mrr_at_20 value: 57.979 - type: mrr_at_3 value: 54.422000000000004 - type: mrr_at_5 value: 56.15599999999999 - type: ndcg_at_1 value: 39.796 - type: ndcg_at_10 value: 30.263 - type: ndcg_at_100 value: 40.825 - type: ndcg_at_1000 value: 52.447 - type: ndcg_at_20 value: 30.453000000000003 - type: ndcg_at_3 value: 35.086 - type: ndcg_at_5 value: 31.947 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 26.327 - type: precision_at_100 value: 8.041 - type: precision_at_1000 value: 1.582 - type: precision_at_20 value: 19.592000000000002 - type: precision_at_3 value: 36.054 - type: precision_at_5 value: 31.019999999999996 - type: recall_at_1 value: 3.2520000000000002 - type: recall_at_10 value: 18.471 - type: recall_at_100 value: 49.08 - type: recall_at_1000 value: 84.733 - type: recall_at_20 value: 26.389000000000003 - type: recall_at_3 value: 8.051 - type: recall_at_5 value: 11.672 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: None config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 68.10546875 - type: ap value: 12.899352291322325 - type: f1 value: 52.14484661172115 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: None config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 62.323146576117715 - type: f1 value: 62.6518883448989 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: None config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.261957327618525 - type: v_measures value: - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 
0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 
0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 
0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 
0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 
0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 
0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - 0.4873375900729135 - 0.5129229336124553 - 0.515681357542704 - 0.511464496088557 - 0.5090884385457786 - 0.5125351055552001 - 0.5124982980752528 - 0.517332919326808 - 0.5232255784709567 - 0.5241090154712252 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: None config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.09542826488645 - type: cos_sim_ap value: 77.72170475021885 - type: cos_sim_f1 value: 70.67669172932331 - type: cos_sim_precision value: 64.5238614077141 - type: cos_sim_recall value: 78.12664907651715 - type: dot_accuracy value: 83.96614412588663 - type: dot_ap value: 68.08590796036842 - type: dot_f1 value: 63.934426229508205 - type: dot_precision value: 58.854860186418115 - type: dot_recall value: 69.9736147757256 - type: euclidean_accuracy value: 87.20271800679502 - type: euclidean_ap value: 77.87533191263717 - type: euclidean_f1 value: 70.92216475337455 - type: euclidean_precision value: 67.94778825235677 - type: euclidean_recall value: 74.1688654353562 - type: manhattan_accuracy value: 87.20867854801216 - type: manhattan_ap value: 77.84249032925085 - type: manhattan_f1 value: 71.11665626949471 - type: manhattan_precision value: 67.45562130177515 - type: manhattan_recall value: 75.19788918205805 - type: max_accuracy value: 87.20867854801216 - type: max_ap value: 77.87533191263717 - type: max_f1 value: 71.11665626949471 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: None config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.22070865836147 - type: cos_sim_ap value: 86.38617271379728 - type: cos_sim_f1 value: 78.946594085626 - type: cos_sim_precision value: 75.5774647887324 - type: cos_sim_recall value: 82.63012011087157 - type: dot_accuracy value: 87.16963558039352 - type: dot_ap value: 82.0965358395614 - type: dot_f1 value: 75.00997859138575 - type: dot_precision 
      value: 70.93541966920596
    - type: dot_recall
      value: 79.58115183246073
    - type: euclidean_accuracy
      value: 89.14891139830016
    - type: euclidean_ap
      value: 86.28000880804873
    - type: euclidean_f1
      value: 78.7341306347746
    - type: euclidean_precision
      value: 75.40706280397546
    - type: euclidean_recall
      value: 82.36834000615954
    - type: manhattan_accuracy
      value: 89.15279233127644
    - type: manhattan_ap
      value: 86.25024653784152
    - type: manhattan_f1
      value: 78.72760457406788
    - type: manhattan_precision
      value: 76.25369795800563
    - type: manhattan_recall
      value: 81.36741607637819
    - type: max_accuracy
      value: 89.22070865836147
    - type: max_ap
      value: 86.38617271379728
    - type: max_f1
      value: 78.946594085626
---

# ModernBERT-embed-large

ModernBERT-embed-large is an embedding model trained from [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large), bringing the new advances of ModernBERT to embeddings!

Indeed, ModernBERT is a base model trained for Masked Language Modeling and cannot be used directly to perform tasks such as retrieval without further fine-tuning.

ModernBERT-embed-large is fine-tuned on the [Nomic Embed](https://arxiv.org/abs/2402.01613) weakly-supervised and supervised datasets and also supports Matryoshka Representation Learning at dimension 256 to reduce memory usage with minimal performance loss.

## Performance

| Model | Dimensions | Average (56) | Classification (12) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) |
|-----------------------|------------|--------------|---------------------|-----------------|-------------------------|---------------|----------------|-----------|------------------|
| nomic-embed-text-v1.5 | 768 | 62.28 | 73.55 | 43.93 | 84.61 | 55.78 | 53.01 | 81.94 | 30.4 |
| modernbert-embed-base | 768 | 62.62 | 74.31 | 44.98 | 83.96 | 56.42 | 52.89 | 81.78 | **31.39** |
| modernbert-embed-large | 1024 | **63.84** | **75.03** | **46.04** | **85.31** | **57.64** | **54.36** | **83.80** | 28.31 |
| nomic-embed-text-v1.5 | 256 | 61.04 | 72.1 | 43.16 | 84.09 | 55.18 | 50.81 | 81.34 | 30.05 |
| modernbert-embed-base | 256 | 61.17 | 72.40 | 43.82 | 83.45 | 55.69 | 50.62 | 81.12 | 31.27 |
| modernbert-embed-large | 256 | 62.43 | 73.60 | 44.59 | 84.89 | 57.08 | 51.72 | 83.46 | 29.03 |

## Usage

You can use these models directly with the latest `transformers` release, which requires installing `transformers>=4.48.0`:

```bash
pip install transformers>=4.48.0
```

As a reminder, this model is trained similarly to Nomic Embed and **REQUIRES** prefixes to be added to the input. For more information, see the instructions in [Nomic Embed](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5#task-instruction-prefixes).

For most use cases, adding `search_query: ` to the query and `search_document: ` to the documents will be sufficient.
### Sentence Transformers

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lightonai/modernbert-embed-large")

query_embeddings = model.encode([
    "search_query: What is TSNE?",
    "search_query: Who is Laurens van der Maaten?",
])
doc_embeddings = model.encode([
    "search_document: TSNE is a dimensionality reduction algorithm created by Laurens van Der Maaten",
])
print(query_embeddings.shape, doc_embeddings.shape)
# (2, 1024) (1, 1024)

similarities = model.similarity(query_embeddings, doc_embeddings)
print(similarities)
# tensor([[0.6518],
#         [0.4237]])
```

<details><summary>Click to see Sentence Transformers usage with Matryoshka Truncation</summary>

In Sentence Transformers, you can truncate embeddings to a smaller dimension by using the `truncate_dim` parameter when loading the `SentenceTransformer` model.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lightonai/modernbert-embed-large", truncate_dim=256)

query_embeddings = model.encode([
    "search_query: What is TSNE?",
    "search_query: Who is Laurens van der Maaten?",
])
doc_embeddings = model.encode([
    "search_document: TSNE is a dimensionality reduction algorithm created by Laurens van Der Maaten",
])
print(query_embeddings.shape, doc_embeddings.shape)
# (2, 256) (1, 256)

similarities = model.similarity(query_embeddings, doc_embeddings)
print(similarities)
# tensor([[0.6835],
#         [0.3982]])
```

Note the small differences compared to the full 1024-dimensional similarities.

</details>

### Transformers

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel


def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = (
        attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    )
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(
        input_mask_expanded.sum(1), min=1e-9
    )


queries = ["search_query: What is TSNE?", "search_query: Who is Laurens van der Maaten?"]
documents = ["search_document: TSNE is a dimensionality reduction algorithm created by Laurens van Der Maaten"]

tokenizer = AutoTokenizer.from_pretrained("lightonai/modernbert-embed-large")
model = AutoModel.from_pretrained("lightonai/modernbert-embed-large")

encoded_queries = tokenizer(queries, padding=True, truncation=True, return_tensors="pt")
encoded_documents = tokenizer(documents, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    queries_outputs = model(**encoded_queries)
    documents_outputs = model(**encoded_documents)

query_embeddings = mean_pooling(queries_outputs, encoded_queries["attention_mask"])
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
doc_embeddings = mean_pooling(documents_outputs, encoded_documents["attention_mask"])
doc_embeddings = F.normalize(doc_embeddings, p=2, dim=1)
print(query_embeddings.shape, doc_embeddings.shape)
# torch.Size([2, 1024]) torch.Size([1, 1024])

similarities = query_embeddings @ doc_embeddings.T
print(similarities)
# tensor([[0.6518],
#         [0.4237]])
```

<details><summary>Click to see Transformers usage with Matryoshka Truncation</summary>

In `transformers`, you can truncate embeddings to a smaller dimension by slicing the mean pooled embeddings, prior to normalization.
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel


def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = (
        attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    )
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(
        input_mask_expanded.sum(1), min=1e-9
    )


queries = ["search_query: What is TSNE?", "search_query: Who is Laurens van der Maaten?"]
documents = ["search_document: TSNE is a dimensionality reduction algorithm created by Laurens van Der Maaten"]

tokenizer = AutoTokenizer.from_pretrained("lightonai/modernbert-embed-large")
model = AutoModel.from_pretrained("lightonai/modernbert-embed-large")
truncate_dim = 256

encoded_queries = tokenizer(queries, padding=True, truncation=True, return_tensors="pt")
encoded_documents = tokenizer(documents, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    queries_outputs = model(**encoded_queries)
    documents_outputs = model(**encoded_documents)

query_embeddings = mean_pooling(queries_outputs, encoded_queries["attention_mask"])
query_embeddings = query_embeddings[:, :truncate_dim]
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
doc_embeddings = mean_pooling(documents_outputs, encoded_documents["attention_mask"])
doc_embeddings = doc_embeddings[:, :truncate_dim]
doc_embeddings = F.normalize(doc_embeddings, p=2, dim=1)
print(query_embeddings.shape, doc_embeddings.shape)
# torch.Size([2, 256]) torch.Size([1, 256])

similarities = query_embeddings @ doc_embeddings.T
print(similarities)
# tensor([[0.6835],
#         [0.3982]])
```

Note the small differences compared to the full 1024-dimensional similarities.

</details>

### Transformers.js

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:

```bash
npm i @huggingface/transformers
```

Then, you can compute embeddings as follows:

```javascript
import { pipeline, matmul } from '@huggingface/transformers';

// Create a feature extraction pipeline
const extractor = await pipeline(
    "feature-extraction",
    "lightonai/modernbert-embed-large",
    { dtype: "fp32" }, // Supported options: "fp32", "fp16", "q8", "q4", "q4f16"
);

// Embed queries and documents
const query_embeddings = await extractor([
    "search_query: What is TSNE?",
    "search_query: Who is Laurens van der Maaten?",
], { pooling: "mean", normalize: true },
);
const doc_embeddings = await extractor([
    "search_document: TSNE is a dimensionality reduction algorithm created by Laurens van Der Maaten",
], { pooling: "mean", normalize: true },
);

// Compute similarity scores
const similarities = await matmul(query_embeddings, doc_embeddings.transpose(1, 0));
console.log(similarities.tolist());
```

## Training

We train ModernBERT-embed-large using a multi-stage training pipeline. Starting from the pretrained [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) model, the first unsupervised contrastive stage trains on a dataset generated from weakly related text pairs, such as question-answer pairs from forums like StackExchange and Quora, title-body pairs from Amazon reviews, and summarizations from news articles.

In the second finetuning stage, higher quality labeled datasets such as search queries and answers from web searches are leveraged. Data curation and hard-example mining are crucial in this stage.
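As a rough illustration of what these contrastive stages optimize, the sketch below implements a generic InfoNCE-style objective over query/document embedding pairs with in-batch negatives. It is a simplified, hypothetical example (the temperature value, batch size, and the random tensors standing in for model outputs are placeholder assumptions), not the actual training code or the exact recipe described in the report linked below.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(query_embeddings, doc_embeddings, temperature=0.05):
    """InfoNCE-style contrastive loss with in-batch negatives.

    Each query is paired with the document at the same index; every other
    document in the batch serves as a negative example.
    """
    # Normalize so the dot product is a cosine similarity
    q = F.normalize(query_embeddings, p=2, dim=1)
    d = F.normalize(doc_embeddings, p=2, dim=1)

    # Similarity matrix of shape (batch, batch); diagonal entries are the positives
    logits = q @ d.T / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)


# Toy example with random tensors standing in for pooled model outputs
batch_size, dim = 8, 1024
queries = torch.randn(batch_size, dim)
documents = torch.randn(batch_size, dim)
print(info_nce_loss(queries, documents).item())
```

In practice, the real recipe combines an objective of this kind with the task prefixes and the hard-example mining mentioned above.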
For more details, see the Nomic Embed [Technical Report](https://static.nomic.ai/reports/2024_Nomic_Embed_Text_Technical_Report.pdf) and corresponding [blog post](https://blog.nomic.ai/posts/nomic-embed-text-v1).

The training data is released in its entirety. For more details, see the `contrastors` [repository](https://github.com/nomic-ai/contrastors).

## Acknowledgment

We want to thank [Zach Nussbaum](https://huggingface.co/zpn) from [Nomic AI](https://huggingface.co/nomic-ai) for building and sharing the Nomic Embed recipe and tools, and for their support during the training of this model!

The training has been run on Orange Business Cloud Avenue infrastructure.

## Citation

If you find the model, dataset, or training code useful, please consider citing ModernBERT as well as Nomic Embed:

```bibtex
@misc{modernbert,
      title={Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference},
      author={Benjamin Warner and Antoine Chaffin and Benjamin Clavié and Orion Weller and Oskar Hallström and Said Taghadouini and Alexis Gallagher and Raja Biswas and Faisal Ladhak and Tom Aarsen and Nathan Cooper and Griffin Adams and Jeremy Howard and Iacopo Poli},
      year={2024},
      eprint={2412.13663},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.13663},
}
```

```bibtex
@misc{nussbaum2024nomic,
      title={Nomic Embed: Training a Reproducible Long Context Text Embedder},
      author={Zach Nussbaum and John X. Morris and Brandon Duderstadt and Andriy Mulyar},
      year={2024},
      eprint={2402.01613},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

And if you want to cite this fine-tuning in particular, please use:

```bibtex
@misc{ModernBERT-embed-large,
      title={ModernBERT-embed-large},
      author={Chaffin, Antoine},
      url={https://huggingface.co/lightonai/modernbert-embed-large},
      year={2025}
}
```
[ "BIOSSES", "SCIFACT" ]
{"base_model": ["answerdotai/ModernBERT-large", "lightonai/modernbert-embed-large-unsupervised"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "transformers.js"], "model-index": [{"name": "modernbert-embed-large", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "None", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 76.7910447761194}, {"type": "ap", "value": 39.79562424828666}, {"type": "f1", "value": 70.69575548517653}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "None", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 94.19505000000001}, {"type": "ap", "value": 91.75071069741077}, {"type": "f1", "value": 94.19151001437368}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "None", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 47.664}, {"type": "f1", "value": 46.932904638602466}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "None", "config": "default", "split": "test", "revision": "c22ab2a51041ffd869aaddef7af8d8215647e41a"}, "metrics": [{"type": "map_at_1", "value": 25.178}, {"type": "map_at_10", "value": 41.088}, {"type": "map_at_100", "value": 42.143}, {"type": "map_at_1000", "value": 42.152}, {"type": "map_at_20", "value": 41.946}, {"type": "map_at_3", "value": 36.048}, {"type": "map_at_5", "value": 38.619}, {"type": "mrr_at_1", "value": 25.533}, {"type": "mrr_at_10", "value": 41.238}, {"type": "mrr_at_100", "value": 42.293}, {"type": "mrr_at_1000", "value": 42.302}, {"type": "mrr_at_20", "value": 42.096000000000004}, {"type": "mrr_at_3", "value": 36.260999999999996}, {"type": "mrr_at_5", "value": 38.797}, {"type": "ndcg_at_1", "value": 25.178}, {"type": "ndcg_at_10", "value": 50.352}, {"type": "ndcg_at_100", "value": 54.583000000000006}, {"type": "ndcg_at_1000", "value": 54.797}, {"type": "ndcg_at_20", "value": 53.36}, {"type": "ndcg_at_3", "value": 39.781}, {"type": "ndcg_at_5", "value": 44.412}, {"type": "precision_at_1", "value": 25.178}, {"type": "precision_at_10", "value": 8.016}, {"type": "precision_at_100", "value": 0.98}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_20", "value": 4.591}, {"type": "precision_at_3", "value": 16.88}, {"type": "precision_at_5", "value": 12.376}, {"type": "recall_at_1", "value": 25.178}, {"type": "recall_at_10", "value": 80.156}, {"type": "recall_at_100", "value": 98.009}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_20", "value": 91.821}, {"type": "recall_at_3", "value": 50.63999999999999}, {"type": "recall_at_5", "value": 61.878}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "None", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 47.800803622189214}, {"type": "v_measures", "value": [0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 
0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 
0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 
0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 
0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492, 0.45978238220905315, 0.501480083181185, 0.48967239140474045, 0.4751957557116818, 0.46928677237487587, 0.47135861735124435, 0.4795286266157441, 0.48441035326165754, 0.47945476864912945, 0.45912059930502597, 0.5592448526471332, 0.5674112737806248, 0.5567224389492952, 0.5541118789802117, 0.570514423105391, 0.5629670037938863, 0.5615893409655635, 0.5625434649173611, 0.5565761783630462, 0.5623718557128333, 0.5210204606034864, 0.2859950794098042, 0.45504510640487766, 0.4047776074746812, 0.3535102351915281, 0.28472692335289046, 0.307070020692249, 0.24530323287003208, 0.3021496005249739, 1.0, 0.2753077950744492]}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "None", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 39.46617889287484}, {"type": "v_measures", "value": [0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 
0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 
0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 
0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 
0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 
0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 
0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 
0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 
0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 
0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 
0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216, 0.3893033531591254, 0.37759008636346686, 0.4009935123702046, 0.4178331058752503, 0.3649433015889931, 0.38654663347660045, 0.3986450977154441, 0.3968923489520449, 0.40313313179256627, 0.4126496238026695, 0.45374176499552477, 0.46161782893366204, 0.4570977042014734, 0.4657228049179058, 0.4591935076221456, 0.4598535119202477, 0.4582830838756286, 0.45749858873241683, 0.4544639331450374, 0.45549406822102056, 0.4234000416463061, 0.2369850950367345, 0.32010658770443073, 0.35615139000924473, 0.2892879335706423, 0.2131268051282916, 0.2509721947237842, 0.15542440069786495, 0.24428237711980852, 1.0, 0.21328163949266216]}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "None", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 63.213756562834234}, {"type": "mrr", "value": 76.76493866244557}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "None", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.90977143141957}, {"type": "cos_sim_spearman", "value": 87.47729443431557}, {"type": "euclidean_pearson", "value": 86.45663786393041}, {"type": "euclidean_spearman", "value": 86.31461733951959}, {"type": "manhattan_pearson", "value": 85.94280510342506}, {"type": "manhattan_spearman", "value": 85.61158927235539}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "None", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 86.17857142857144},
{"type": "f1", "value": 86.14192410600847}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "None", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 39.466770895318334}, {"type": "v_measures", "value": [0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 
0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 
0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 
0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 
0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 
0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013, 0.3849133523751956, 0.3914994239029642, 0.3795692975652329, 0.39567466406296875, 0.39744518295653425, 0.4016581709951989, 0.38473345446264967, 0.40844969151861044, 0.3935056530915587, 0.40922819860092013]}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "None", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 34.668146108668715}, {"type": "v_measures", "value": [0.34627871014268474, 0.35947142706082674, 0.35557599816049484, 0.3313213383920607, 0.33475049791707046, 0.3464858366916894, 0.34749918466307905, 0.3459299753204718, 
0.34627871014268474, 0.35947142706082674, 0.35557599816049484, 0.3313213383920607, 0.33475049791707046, 0.3464858366916894, 0.34749918466307905, 0.3459299753204718, 0.35574882126513674, 0.3437528212533572]}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "f46a197baaae43b4f621051089b82a364682dfeb"}, "metrics": [{"type": "map_at_1", "value": 34.394999999999996}, {"type": "map_at_10", "value": 45.882}, {"type": "map_at_100", "value": 47.4}, {"type": "map_at_1000", "value": 47.509}, {"type": "map_at_20", "value": 46.822}, {"type": "map_at_3", "value": 42.408}, {"type": "map_at_5", "value": 44.586}, {"type": "mrr_at_1", "value": 41.202}, {"type": "mrr_at_10", "value": 51.134}, {"type": "mrr_at_100", "value": 51.943}, {"type": "mrr_at_1000", "value": 51.986}, {"type": "mrr_at_20", "value": 51.717}, {"type": "mrr_at_3", "value": 48.784}, {"type": "mrr_at_5", "value": 50.336000000000006}, {"type": "ndcg_at_1",
"value": 41.202}, {"type": "ndcg_at_10", "value": 51.842999999999996}, {"type": "ndcg_at_100", "value": 57.177}, {"type": "ndcg_at_1000", "value": 58.89}, {"type": "ndcg_at_20", "value": 54.357}, {"type": "ndcg_at_3", "value": 47.286}, {"type": "ndcg_at_5", "value": 49.829}, {"type": "precision_at_1", "value": 41.202}, {"type": "precision_at_10", "value": 9.585}, {"type": "precision_at_100", "value": 1.5150000000000001}, {"type": "precision_at_1000", "value": 0.194}, {"type": "precision_at_20", "value": 5.808}, {"type": "precision_at_3", "value": 22.508}, {"type": "precision_at_5", "value": 16.366}, {"type": "recall_at_1", "value": 34.394999999999996}, {"type": "recall_at_10", "value": 63.17}, {"type": "recall_at_100", "value": 84.867}, {"type": "recall_at_1000", "value": 95.733}, {"type": "recall_at_20", "value": 72.011}, {"type": "recall_at_3", "value": 49.966}, {"type": "recall_at_5", "value": 56.802}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackEnglishRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "ad9991cb51e31e31e430383c75ffb2885547b5f0"}, "metrics": [{"type": "map_at_1", "value": 34.324}, {"type": "map_at_10", "value": 46.123}, {"type": "map_at_100", "value": 47.455999999999996}, {"type": "map_at_1000", "value": 47.576}, {"type": "map_at_20", "value": 46.851}, {"type": "map_at_3", "value": 42.945}, {"type": "map_at_5", "value": 44.751000000000005}, {"type": "mrr_at_1", "value": 43.248}, {"type": "mrr_at_10", "value": 52.544000000000004}, {"type": "mrr_at_100", "value": 53.102000000000004}, {"type": "mrr_at_1000", "value": 53.138}, {"type": "mrr_at_20", "value": 52.861000000000004}, {"type": "mrr_at_3", "value": 50.37199999999999}, {"type": "mrr_at_5", "value": 51.712}, {"type": "ndcg_at_1", "value": 43.248}, {"type": "ndcg_at_10", "value": 52.235}, {"type": "ndcg_at_100", "value": 56.355}, {"type": "ndcg_at_1000", "value": 58.053}, {"type": "ndcg_at_20", "value": 53.849000000000004}, {"type": "ndcg_at_3", "value": 48.208}, {"type": "ndcg_at_5", "value": 50.134}, {"type": "precision_at_1", "value": 43.248}, {"type": "precision_at_10", "value": 9.917}, {"type": "precision_at_100", "value": 1.532}, {"type": "precision_at_1000", "value": 0.198}, {"type": "precision_at_20", "value": 5.779999999999999}, {"type": "precision_at_3", "value": 23.588}, {"type": "precision_at_5", "value": 16.586000000000002}, {"type": "recall_at_1", "value": 34.324}, {"type": "recall_at_10", "value": 62.56}, {"type": "recall_at_100", "value": 79.745}, {"type": "recall_at_1000", "value": 90.082}, {"type": "recall_at_20", "value": 68.367}, {"type": "recall_at_3", "value": 50.171}, {"type": "recall_at_5", "value": 55.889}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGamingRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "4885aa143210c98657558c04aaf3dc47cfb54340"}, "metrics": [{"type": "map_at_1", "value": 44.143}, {"type": "map_at_10", "value": 56.53}, {"type": "map_at_100", "value": 57.48799999999999}, {"type": "map_at_1000", "value": 57.535000000000004}, {"type": "map_at_20", "value": 57.152}, {"type": "map_at_3", "value": 53.382}, {"type": "map_at_5", "value": 55.156000000000006}, {"type": "mrr_at_1", "value": 50.09400000000001}, {"type": "mrr_at_10", "value": 59.819}, {"type": "mrr_at_100", "value": 60.431000000000004}, {"type": "mrr_at_1000", "value": 60.455000000000005}, {"type": "mrr_at_20", "value": 60.251999999999995}, {"type": "mrr_at_3", "value": 57.544}, {"type": 
"mrr_at_5", "value": 58.904999999999994}, {"type": "ndcg_at_1", "value": 50.09400000000001}, {"type": "ndcg_at_10", "value": 62.141999999999996}, {"type": "ndcg_at_100", "value": 65.755}, {"type": "ndcg_at_1000", "value": 66.674}, {"type": "ndcg_at_20", "value": 63.92400000000001}, {"type": "ndcg_at_3", "value": 56.986000000000004}, {"type": "ndcg_at_5", "value": 59.519999999999996}, {"type": "precision_at_1", "value": 50.09400000000001}, {"type": "precision_at_10", "value": 9.743}, {"type": "precision_at_100", "value": 1.246}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_20", "value": 5.439}, {"type": "precision_at_3", "value": 25.119999999999997}, {"type": "precision_at_5", "value": 17.052999999999997}, {"type": "recall_at_1", "value": 44.143}, {"type": "recall_at_10", "value": 75.372}, {"type": "recall_at_100", "value": 90.602}, {"type": "recall_at_1000", "value": 97.043}, {"type": "recall_at_20", "value": 81.83500000000001}, {"type": "recall_at_3", "value": 61.607}, {"type": "recall_at_5", "value": 67.755}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGisRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "5003b3064772da1887988e05400cf3806fe491f2"}, "metrics": [{"type": "map_at_1", "value": 26.621}, {"type": "map_at_10", "value": 35.865}, {"type": "map_at_100", "value": 36.93}, {"type": "map_at_1000", "value": 37.008}, {"type": "map_at_20", "value": 36.509}, {"type": "map_at_3", "value": 33.532000000000004}, {"type": "map_at_5", "value": 34.745}, {"type": "mrr_at_1", "value": 28.588}, {"type": "mrr_at_10", "value": 37.828}, {"type": "mrr_at_100", "value": 38.779}, {"type": "mrr_at_1000", "value": 38.834}, {"type": "mrr_at_20", "value": 38.419}, {"type": "mrr_at_3", "value": 35.725}, {"type": "mrr_at_5", "value": 36.803999999999995}, {"type": "ndcg_at_1", "value": 28.588}, {"type": "ndcg_at_10", "value": 40.983999999999995}, {"type": "ndcg_at_100", "value": 46.117000000000004}, {"type": "ndcg_at_1000", "value": 47.959}, {"type": "ndcg_at_20", "value": 43.22}, {"type": "ndcg_at_3", "value": 36.455}, {"type": "ndcg_at_5", "value": 38.393}, {"type": "precision_at_1", "value": 28.588}, {"type": "precision_at_10", "value": 6.282}, {"type": "precision_at_100", "value": 0.927}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_20", "value": 3.689}, {"type": "precision_at_3", "value": 15.744}, {"type": "precision_at_5", "value": 10.644}, {"type": "recall_at_1", "value": 26.621}, {"type": "recall_at_10", "value": 54.80199999999999}, {"type": "recall_at_100", "value": 78.171}, {"type": "recall_at_1000", "value": 91.786}, {"type": "recall_at_20", "value": 63.195}, {"type": "recall_at_3", "value": 42.164}, {"type": "recall_at_5", "value": 46.936}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackMathematicaRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "90fceea13679c63fe563ded68f3b6f06e50061de"}, "metrics": [{"type": "map_at_1", "value": 18.619}, {"type": "map_at_10", "value": 27.577}, {"type": "map_at_100", "value": 28.717}, {"type": "map_at_1000", "value": 28.835}, {"type": "map_at_20", "value": 28.18}, {"type": "map_at_3", "value": 24.462999999999997}, {"type": "map_at_5", "value": 26.230999999999998}, {"type": "mrr_at_1", "value": 22.886}, {"type": "mrr_at_10", "value": 32.089}, {"type": "mrr_at_100", "value": 32.998}, {"type": "mrr_at_1000", "value": 33.06}, {"type": "mrr_at_20", "value": 32.633}, {"type": 
"mrr_at_3", "value": 29.125}, {"type": "mrr_at_5", "value": 30.792}, {"type": "ndcg_at_1", "value": 22.886}, {"type": "ndcg_at_10", "value": 33.343}, {"type": "ndcg_at_100", "value": 38.735}, {"type": "ndcg_at_1000", "value": 41.393}, {"type": "ndcg_at_20", "value": 35.455}, {"type": "ndcg_at_3", "value": 27.575}, {"type": "ndcg_at_5", "value": 30.361}, {"type": "precision_at_1", "value": 22.886}, {"type": "precision_at_10", "value": 6.256}, {"type": "precision_at_100", "value": 1.03}, {"type": "precision_at_1000", "value": 0.13899999999999998}, {"type": "precision_at_20", "value": 3.7130000000000005}, {"type": "precision_at_3", "value": 13.267000000000001}, {"type": "precision_at_5", "value": 9.851}, {"type": "recall_at_1", "value": 18.619}, {"type": "recall_at_10", "value": 46.478}, {"type": "recall_at_100", "value": 69.614}, {"type": "recall_at_1000", "value": 88.331}, {"type": "recall_at_20", "value": 54.254000000000005}, {"type": "recall_at_3", "value": 30.897999999999996}, {"type": "recall_at_5", "value": 37.785000000000004}, {"type": "map_at_1", "value": 28.592666666666673}, {"type": "map_at_10", "value": 38.50391666666667}, {"type": "map_at_100", "value": 39.719166666666666}, {"type": "map_at_1000", "value": 39.82683333333334}, {"type": "map_at_20", "value": 39.18608333333333}, {"type": "map_at_3", "value": 35.561833333333325}, {"type": "map_at_5", "value": 37.181000000000004}, {"type": "mrr_at_1", "value": 33.67625}, {"type": "mrr_at_10", "value": 42.727}, {"type": "mrr_at_100", "value": 43.55041666666667}, {"type": "mrr_at_1000", "value": 43.60058333333334}, {"type": "mrr_at_20", "value": 43.21508333333333}, {"type": "mrr_at_3", "value": 40.32983333333334}, {"type": "mrr_at_5", "value": 41.699333333333335}, {"type": "ndcg_at_1", "value": 33.67625}, {"type": "ndcg_at_10", "value": 44.064416666666666}, {"type": "ndcg_at_100", "value": 49.085}, {"type": "ndcg_at_1000", "value": 51.09325}, {"type": "ndcg_at_20", "value": 46.07716666666666}, {"type": "ndcg_at_3", "value": 39.22225}, {"type": "ndcg_at_5", "value": 41.47508333333333}, {"type": "precision_at_1", "value": 33.67625}, {"type": "precision_at_10", "value": 7.689916666666667}, {"type": "precision_at_100", "value": 1.1995833333333334}, {"type": "precision_at_1000", "value": 0.15541666666666665}, {"type": "precision_at_20", "value": 4.515500000000001}, {"type": "precision_at_3", "value": 18.07241666666667}, {"type": "precision_at_5", "value": 12.732833333333332}, {"type": "recall_at_1", "value": 28.592666666666673}, {"type": "recall_at_10", "value": 56.15700000000001}, {"type": "recall_at_100", "value": 77.97075000000001}, {"type": "recall_at_1000", "value": 91.73058333333333}, {"type": "recall_at_20", "value": 63.49649999999999}, {"type": "recall_at_3", "value": 42.612833333333334}, {"type": "recall_at_5", "value": 48.44591666666667}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackPhysicsRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4"}, "metrics": [{"type": "map_at_1", "value": 31.087999999999997}, {"type": "map_at_10", "value": 41.668}, {"type": "map_at_100", "value": 42.983}, {"type": "map_at_1000", "value": 43.081}, {"type": "map_at_20", "value": 42.373}, {"type": "map_at_3", "value": 38.481}, {"type": "map_at_5", "value": 40.196}, {"type": "mrr_at_1", "value": 37.824999999999996}, {"type": "mrr_at_10", "value": 47.471999999999994}, {"type": "mrr_at_100", "value": 48.311}, {"type": "mrr_at_1000", "value": 48.351}, 
{"type": "mrr_at_20", "value": 47.981}, {"type": "mrr_at_3", "value": 45.074999999999996}, {"type": "mrr_at_5", "value": 46.37}, {"type": "ndcg_at_1", "value": 37.824999999999996}, {"type": "ndcg_at_10", "value": 47.63}, {"type": "ndcg_at_100", "value": 52.979}, {"type": "ndcg_at_1000", "value": 54.771}, {"type": "ndcg_at_20", "value": 49.733}, {"type": "ndcg_at_3", "value": 42.657000000000004}, {"type": "ndcg_at_5", "value": 44.878}, {"type": "precision_at_1", "value": 37.824999999999996}, {"type": "precision_at_10", "value": 8.527}, {"type": "precision_at_100", "value": 1.303}, {"type": "precision_at_1000", "value": 0.16199999999999998}, {"type": "precision_at_20", "value": 4.966}, {"type": "precision_at_3", "value": 19.955000000000002}, {"type": "precision_at_5", "value": 14.033000000000001}, {"type": "recall_at_1", "value": 31.087999999999997}, {"type": "recall_at_10", "value": 59.585}, {"type": "recall_at_100", "value": 81.625}, {"type": "recall_at_1000", "value": 93.297}, {"type": "recall_at_20", "value": 66.813}, {"type": "recall_at_3", "value": 45.492}, {"type": "recall_at_5", "value": 51.283}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackProgrammersRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "6184bc1440d2dbc7612be22b50686b8826d22b32"}, "metrics": [{"type": "map_at_1", "value": 28.756999999999998}, {"type": "map_at_10", "value": 40.275}, {"type": "map_at_100", "value": 41.655}, {"type": "map_at_1000", "value": 41.752}, {"type": "map_at_20", "value": 41.118}, {"type": "map_at_3", "value": 36.815}, {"type": "map_at_5", "value": 38.662}, {"type": "mrr_at_1", "value": 35.502}, {"type": "mrr_at_10", "value": 45.818}, {"type": "mrr_at_100", "value": 46.704}, {"type": "mrr_at_1000", "value": 46.745999999999995}, {"type": "mrr_at_20", "value": 46.387}, {"type": "mrr_at_3", "value": 43.322}, {"type": "mrr_at_5", "value": 44.675}, {"type": "ndcg_at_1", "value": 35.502}, {"type": "ndcg_at_10", "value": 46.658}, {"type": "ndcg_at_100", "value": 52.097}, {"type": "ndcg_at_1000", "value": 53.928}, {"type": "ndcg_at_20", "value": 49.134}, {"type": "ndcg_at_3", "value": 41.234}, {"type": "ndcg_at_5", "value": 43.579}, {"type": "precision_at_1", "value": 35.502}, {"type": "precision_at_10", "value": 8.652999999999999}, {"type": "precision_at_100", "value": 1.306}, {"type": "precision_at_1000", "value": 0.163}, {"type": "precision_at_20", "value": 5.086}, {"type": "precision_at_3", "value": 19.825}, {"type": "precision_at_5", "value": 13.995}, {"type": "recall_at_1", "value": 28.756999999999998}, {"type": "recall_at_10", "value": 59.79}, {"type": "recall_at_100", "value": 82.597}, {"type": "recall_at_1000", "value": 94.663}, {"type": "recall_at_20", "value": 68.74}, {"type": "recall_at_3", "value": 44.736}, {"type": "recall_at_5", "value": 51.047}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackStatsRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "65ac3a16b8e91f9cee4c9828cc7c335575432a2a"}, "metrics": [{"type": "map_at_1", "value": 25.381999999999998}, {"type": "map_at_10", "value": 33.311}, {"type": "map_at_100", "value": 34.171}, {"type": "map_at_1000", "value": 34.254}, {"type": "map_at_20", "value": 33.732}, {"type": "map_at_3", "value": 31.025999999999996}, {"type": "map_at_5", "value": 32.253}, {"type": "mrr_at_1", "value": 28.221}, {"type": "mrr_at_10", "value": 36.132999999999996}, {"type": "mrr_at_100", "value": 36.848}, {"type": "mrr_at_1000", "value": 
36.902}, {"type": "mrr_at_20", "value": 36.497}, {"type": "mrr_at_3", "value": 33.947}, {"type": "mrr_at_5", "value": 35.174}, {"type": "ndcg_at_1", "value": 28.221}, {"type": "ndcg_at_10", "value": 37.882}, {"type": "ndcg_at_100", "value": 42.283}, {"type": "ndcg_at_1000", "value": 44.458}, {"type": "ndcg_at_20", "value": 39.268}, {"type": "ndcg_at_3", "value": 33.611999999999995}, {"type": "ndcg_at_5", "value": 35.583}, {"type": "precision_at_1", "value": 28.221}, {"type": "precision_at_10", "value": 6.043}, {"type": "precision_at_100", "value": 0.8909999999999999}, {"type": "precision_at_1000", "value": 0.11499999999999999}, {"type": "precision_at_20", "value": 3.405}, {"type": "precision_at_3", "value": 14.673}, {"type": "precision_at_5", "value": 10.152999999999999}, {"type": "recall_at_1", "value": 25.381999999999998}, {"type": "recall_at_10", "value": 48.980000000000004}, {"type": "recall_at_100", "value": 69.625}, {"type": "recall_at_1000", "value": 85.946}, {"type": "recall_at_20", "value": 54.041}, {"type": "recall_at_3", "value": 37.077}, {"type": "recall_at_5", "value": 42.097}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackTexRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "46989137a86843e03a6195de44b09deda022eec7"}, "metrics": [{"type": "map_at_1", "value": 18.186}, {"type": "map_at_10", "value": 26.450000000000003}, {"type": "map_at_100", "value": 27.62}, {"type": "map_at_1000", "value": 27.746}, {"type": "map_at_20", "value": 27.105}, {"type": "map_at_3", "value": 23.982999999999997}, {"type": "map_at_5", "value": 25.306}, {"type": "mrr_at_1", "value": 22.092}, {"type": "mrr_at_10", "value": 30.326999999999998}, {"type": "mrr_at_100", "value": 31.322}, {"type": "mrr_at_1000", "value": 31.394}, {"type": "mrr_at_20", "value": 30.923000000000002}, {"type": "mrr_at_3", "value": 28.063}, {"type": "mrr_at_5", "value": 29.284}, {"type": "ndcg_at_1", "value": 22.092}, {"type": "ndcg_at_10", "value": 31.418000000000003}, {"type": "ndcg_at_100", "value": 36.924}, {"type": "ndcg_at_1000", "value": 39.645}, {"type": "ndcg_at_20", "value": 33.597}, {"type": "ndcg_at_3", "value": 27.045}, {"type": "ndcg_at_5", "value": 28.971999999999998}, {"type": "precision_at_1", "value": 22.092}, {"type": "precision_at_10", "value": 5.785}, {"type": "precision_at_100", "value": 0.989}, {"type": "precision_at_1000", "value": 0.13999999999999999}, {"type": "precision_at_20", "value": 3.517}, {"type": "precision_at_3", "value": 12.985}, {"type": "precision_at_5", "value": 9.291}, {"type": "recall_at_1", "value": 18.186}, {"type": "recall_at_10", "value": 42.443}, {"type": "recall_at_100", "value": 66.964}, {"type": "recall_at_1000", "value": 86.005}, {"type": "recall_at_20", "value": 50.52799999999999}, {"type": "recall_at_3", "value": 30.095}, {"type": "recall_at_5", "value": 35.148}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackUnixRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "6c6430d3a6d36f8d2a829195bc5dc94d7e063e53"}, "metrics": [{"type": "map_at_1", "value": 31.049}, {"type": "map_at_10", "value": 40.217000000000006}, {"type": "map_at_100", "value": 41.345}, {"type": "map_at_1000", "value": 41.447}, {"type": "map_at_20", "value": 40.818}, {"type": "map_at_3", "value": 37.413999999999994}, {"type": "map_at_5", "value": 39.001000000000005}, {"type": "mrr_at_1", "value": 36.474000000000004}, {"type": "mrr_at_10", "value": 44.655}, {"type": "mrr_at_100", "value": 
45.399}, {"type": "mrr_at_1000", "value": 45.454}, {"type": "mrr_at_20", "value": 45.011}, {"type": "mrr_at_3", "value": 42.226}, {"type": "mrr_at_5", "value": 43.653999999999996}, {"type": "ndcg_at_1", "value": 36.474000000000004}, {"type": "ndcg_at_10", "value": 45.509}, {"type": "ndcg_at_100", "value": 50.571}, {"type": "ndcg_at_1000", "value": 52.605999999999995}, {"type": "ndcg_at_20", "value": 47.275}, {"type": "ndcg_at_3", "value": 40.766000000000005}, {"type": "ndcg_at_5", "value": 42.979}, {"type": "precision_at_1", "value": 36.474000000000004}, {"type": "precision_at_10", "value": 7.509}, {"type": "precision_at_100", "value": 1.1320000000000001}, {"type": "precision_at_1000", "value": 0.14100000000000001}, {"type": "precision_at_20", "value": 4.3}, {"type": "precision_at_3", "value": 18.315}, {"type": "precision_at_5", "value": 12.705}, {"type": "recall_at_1", "value": 31.049}, {"type": "recall_at_10", "value": 57.135999999999996}, {"type": "recall_at_100", "value": 79.196}, {"type": "recall_at_1000", "value": 93.002}, {"type": "recall_at_20", "value": 63.416}, {"type": "recall_at_3", "value": 43.893}, {"type": "recall_at_5", "value": 49.675999999999995}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWebmastersRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "160c094312a0e1facb97e55eeddb698c0abe3571"}, "metrics": [{"type": "map_at_1", "value": 27.929}, {"type": "map_at_10", "value": 36.897000000000006}, {"type": "map_at_100", "value": 38.635000000000005}, {"type": "map_at_1000", "value": 38.842}, {"type": "map_at_20", "value": 37.814}, {"type": "map_at_3", "value": 33.522}, {"type": "map_at_5", "value": 35.128}, {"type": "mrr_at_1", "value": 33.399}, {"type": "mrr_at_10", "value": 41.817}, {"type": "mrr_at_100", "value": 42.797000000000004}, {"type": "mrr_at_1000", "value": 42.842999999999996}, {"type": "mrr_at_20", "value": 42.381}, {"type": "mrr_at_3", "value": 38.999}, {"type": "mrr_at_5", "value": 40.57}, {"type": "ndcg_at_1", "value": 33.399}, {"type": "ndcg_at_10", "value": 43.134}, {"type": "ndcg_at_100", "value": 49.009}, {"type": "ndcg_at_1000", "value": 51.199}, {"type": "ndcg_at_20", "value": 45.391999999999996}, {"type": "ndcg_at_3", "value": 37.645}, {"type": "ndcg_at_5", "value": 39.940999999999995}, {"type": "precision_at_1", "value": 33.399}, {"type": "precision_at_10", "value": 8.36}, {"type": "precision_at_100", "value": 1.646}, {"type": "precision_at_1000", "value": 0.244}, {"type": "precision_at_20", "value": 5.257}, {"type": "precision_at_3", "value": 17.457}, {"type": "precision_at_5", "value": 12.727}, {"type": "recall_at_1", "value": 27.929}, {"type": "recall_at_10", "value": 54.822}, {"type": "recall_at_100", "value": 80.63900000000001}, {"type": "recall_at_1000", "value": 94.382}, {"type": "recall_at_20", "value": 63.432}, {"type": "recall_at_3", "value": 39.291}, {"type": "recall_at_5", "value": 45.385999999999996}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWordpressRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "4ffe81d471b1924886b33c7567bfb200e9eec5c4"}, "metrics": [{"type": "map_at_1", "value": 22.619}, {"type": "map_at_10", "value": 31.252000000000002}, {"type": "map_at_100", "value": 32.23}, {"type": "map_at_1000", "value": 32.336999999999996}, {"type": "map_at_20", "value": 31.758999999999997}, {"type": "map_at_3", "value": 28.771}, {"type": "map_at_5", "value": 30.157}, {"type": "mrr_at_1", "value": 24.584}, 
{"type": "mrr_at_10", "value": 33.088}, {"type": "mrr_at_100", "value": 33.971000000000004}, {"type": "mrr_at_1000", "value": 34.044000000000004}, {"type": "mrr_at_20", "value": 33.519}, {"type": "mrr_at_3", "value": 30.775999999999996}, {"type": "mrr_at_5", "value": 32.116}, {"type": "ndcg_at_1", "value": 24.584}, {"type": "ndcg_at_10", "value": 35.995}, {"type": "ndcg_at_100", "value": 41.018}, {"type": "ndcg_at_1000", "value": 43.543}, {"type": "ndcg_at_20", "value": 37.722}, {"type": "ndcg_at_3", "value": 31.197999999999997}, {"type": "ndcg_at_5", "value": 33.532000000000004}, {"type": "precision_at_1", "value": 24.584}, {"type": "precision_at_10", "value": 5.619}, {"type": "precision_at_100", "value": 0.878}, {"type": "precision_at_1000", "value": 0.121}, {"type": "precision_at_20", "value": 3.2259999999999995}, {"type": "precision_at_3", "value": 13.431999999999999}, {"type": "precision_at_5", "value": 9.39}, {"type": "recall_at_1", "value": 22.619}, {"type": "recall_at_10", "value": 48.746}, {"type": "recall_at_100", "value": 72.004}, {"type": "recall_at_1000", "value": 90.497}, {"type": "recall_at_20", "value": 55.326}, {"type": "recall_at_3", "value": 35.964}, {"type": "recall_at_5", "value": 41.547}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "None", "config": "default", "split": "test", "revision": "47f2ac6acb640fc46020b02a5b59fdda04d39380"}, "metrics": [{"type": "map_at_1", "value": 16.493}, {"type": "map_at_10", "value": 28.988999999999997}, {"type": "map_at_100", "value": 30.964999999999996}, {"type": "map_at_1000", "value": 31.142999999999997}, {"type": "map_at_20", "value": 30.103}, {"type": "map_at_3", "value": 24.006}, {"type": "map_at_5", "value": 26.535999999999998}, {"type": "mrr_at_1", "value": 37.915}, {"type": "mrr_at_10", "value": 50.736000000000004}, {"type": "mrr_at_100", "value": 51.361999999999995}, {"type": "mrr_at_1000", "value": 51.388999999999996}, {"type": "mrr_at_20", "value": 51.148}, {"type": "mrr_at_3", "value": 47.589999999999996}, {"type": "mrr_at_5", "value": 49.55}, {"type": "ndcg_at_1", "value": 37.915}, {"type": "ndcg_at_10", "value": 39.139}, {"type": "ndcg_at_100", "value": 45.993}, {"type": "ndcg_at_1000", "value": 48.861}, {"type": "ndcg_at_20", "value": 41.923}, {"type": "ndcg_at_3", "value": 32.491}, {"type": "ndcg_at_5", "value": 34.775}, {"type": "precision_at_1", "value": 37.915}, {"type": "precision_at_10", "value": 12.293}, {"type": "precision_at_100", "value": 1.9709999999999999}, {"type": "precision_at_1000", "value": 0.251}, {"type": "precision_at_20", "value": 7.3389999999999995}, {"type": "precision_at_3", "value": 24.407999999999998}, {"type": "precision_at_5", "value": 18.775}, {"type": "recall_at_1", "value": 16.493}, {"type": "recall_at_10", "value": 45.904}, {"type": "recall_at_100", "value": 69.037}, {"type": "recall_at_1000", "value": 84.815}, {"type": "recall_at_20", "value": 53.657}, {"type": "recall_at_3", "value": 29.629}, {"type": "recall_at_5", "value": 36.325}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "None", "config": "default", "split": "test", "revision": "c0f706b76e590d620bd6618b3ca8efdd34e2d659"}, "metrics": [{"type": "map_at_1", "value": 9.180000000000001}, {"type": "map_at_10", "value": 20.714}, {"type": "map_at_100", "value": 28.801}, {"type": "map_at_1000", "value": 30.43}, {"type": "map_at_20", "value": 23.673}, {"type": "map_at_3", "value": 14.551}, {"type": "map_at_5", "value": 17.067}, {"type": "mrr_at_1", "value": 68.25}, 
{"type": "mrr_at_10", "value": 75.83}, {"type": "mrr_at_100", "value": 76.225}, {"type": "mrr_at_1000", "value": 76.232}, {"type": "mrr_at_20", "value": 76.14}, {"type": "mrr_at_3", "value": 74.375}, {"type": "mrr_at_5", "value": 75.225}, {"type": "ndcg_at_1", "value": 56.99999999999999}, {"type": "ndcg_at_10", "value": 43.071}, {"type": "ndcg_at_100", "value": 47.189}, {"type": "ndcg_at_1000", "value": 54.125}, {"type": "ndcg_at_20", "value": 42.111}, {"type": "ndcg_at_3", "value": 47.67}, {"type": "ndcg_at_5", "value": 44.983000000000004}, {"type": "precision_at_1", "value": 68.25}, {"type": "precision_at_10", "value": 34.599999999999994}, {"type": "precision_at_100", "value": 10.8}, {"type": "precision_at_1000", "value": 2.12}, {"type": "precision_at_20", "value": 25.7}, {"type": "precision_at_3", "value": 51.417}, {"type": "precision_at_5", "value": 43.85}, {"type": "recall_at_1", "value": 9.180000000000001}, {"type": "recall_at_10", "value": 26.212000000000003}, {"type": "recall_at_100", "value": 52.443}, {"type": "recall_at_1000", "value": 73.939}, {"type": "recall_at_20", "value": 33.101}, {"type": "recall_at_3", "value": 15.787999999999998}, {"type": "recall_at_5", "value": 19.691}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "None", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 49.625}, {"type": "f1", "value": 44.48944228050152}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "None", "config": "default", "split": "test", "revision": "bea83ef9e8fb933d90a2f1d5515737465d613e12"}, "metrics": [{"type": "map_at_1", "value": 76.773}, {"type": "map_at_10", "value": 85.175}, {"type": "map_at_100", "value": 85.353}, {"type": "map_at_1000", "value": 85.36500000000001}, {"type": "map_at_20", "value": 85.271}, {"type": "map_at_3", "value": 84.261}, {"type": "map_at_5", "value": 84.899}, {"type": "mrr_at_1", "value": 82.853}, {"type": "mrr_at_10", "value": 90.02}, {"type": "mrr_at_100", "value": 90.048}, {"type": "mrr_at_1000", "value": 90.048}, {"type": "mrr_at_20", "value": 90.039}, {"type": "mrr_at_3", "value": 89.51599999999999}, {"type": "mrr_at_5", "value": 89.92099999999999}, {"type": "ndcg_at_1", "value": 82.853}, {"type": "ndcg_at_10", "value": 88.75999999999999}, {"type": "ndcg_at_100", "value": 89.347}, {"type": "ndcg_at_1000", "value": 89.547}, {"type": "ndcg_at_20", "value": 88.994}, {"type": "ndcg_at_3", "value": 87.481}, {"type": "ndcg_at_5", "value": 88.31700000000001}, {"type": "precision_at_1", "value": 82.853}, {"type": "precision_at_10", "value": 10.519}, {"type": "precision_at_100", "value": 1.1039999999999999}, {"type": "precision_at_1000", "value": 0.11399999999999999}, {"type": "precision_at_20", "value": 5.341}, {"type": "precision_at_3", "value": 33.323}, {"type": "precision_at_5", "value": 20.596999999999998}, {"type": "recall_at_1", "value": 76.773}, {"type": "recall_at_10", "value": 94.95700000000001}, {"type": "recall_at_100", "value": 97.167}, {"type": "recall_at_1000", "value": 98.354}, {"type": "recall_at_20", "value": 95.71}, {"type": "recall_at_3", "value": 91.47999999999999}, {"type": "recall_at_5", "value": 93.658}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "None", "config": "default", "split": "test", "revision": "27a168819829fe9bcd655c2df245fb19452e8e06"}, "metrics": [{"type": "map_at_1", "value": 21.629}, {"type": "map_at_10", "value": 36.394}, 
{"type": "map_at_100", "value": 38.308}, {"type": "map_at_1000", "value": 38.478}, {"type": "map_at_20", "value": 37.425999999999995}, {"type": "map_at_3", "value": 31.971}, {"type": "map_at_5", "value": 34.5}, {"type": "mrr_at_1", "value": 44.599}, {"type": "mrr_at_10", "value": 53.369}, {"type": "mrr_at_100", "value": 54.06999999999999}, {"type": "mrr_at_1000", "value": 54.114}, {"type": "mrr_at_20", "value": 53.754999999999995}, {"type": "mrr_at_3", "value": 51.415}, {"type": "mrr_at_5", "value": 52.479}, {"type": "ndcg_at_1", "value": 44.599}, {"type": "ndcg_at_10", "value": 44.425}, {"type": "ndcg_at_100", "value": 51.036}, {"type": "ndcg_at_1000", "value": 53.806}, {"type": "ndcg_at_20", "value": 46.934}, {"type": "ndcg_at_3", "value": 41.287}, {"type": "ndcg_at_5", "value": 42.143}, {"type": "precision_at_1", "value": 44.599}, {"type": "precision_at_10", "value": 12.222}, {"type": "precision_at_100", "value": 1.91}, {"type": "precision_at_1000", "value": 0.24}, {"type": "precision_at_20", "value": 7.176}, {"type": "precision_at_3", "value": 28.086}, {"type": "precision_at_5", "value": 20.369999999999997}, {"type": "recall_at_1", "value": 21.629}, {"type": "recall_at_10", "value": 51.168}, {"type": "recall_at_100", "value": 75.32600000000001}, {"type": "recall_at_1000", "value": 91.766}, {"type": "recall_at_20", "value": 58.923}, {"type": "recall_at_3", "value": 37.364999999999995}, {"type": "recall_at_5", "value": 43.322}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "None", "config": "default", "split": "test", "revision": "ab518f4d6fcca38d87c25209f94beba119d02014"}, "metrics": [{"type": "map_at_1", "value": 42.336}, {"type": "map_at_10", "value": 59.602999999999994}, {"type": "map_at_100", "value": 60.367000000000004}, {"type": "map_at_1000", "value": 60.428000000000004}, {"type": "map_at_20", "value": 60.068}, {"type": "map_at_3", "value": 56.842000000000006}, {"type": "map_at_5", "value": 58.669000000000004}, {"type": "mrr_at_1", "value": 84.673}, {"type": "mrr_at_10", "value": 88.713}, {"type": "mrr_at_100", "value": 88.852}, {"type": "mrr_at_1000", "value": 88.857}, {"type": "mrr_at_20", "value": 88.806}, {"type": "mrr_at_3", "value": 88.202}, {"type": "mrr_at_5", "value": 88.522}, {"type": "ndcg_at_1", "value": 84.673}, {"type": "ndcg_at_10", "value": 68.67}, {"type": "ndcg_at_100", "value": 71.277}, {"type": "ndcg_at_1000", "value": 72.47}, {"type": "ndcg_at_20", "value": 69.797}, {"type": "ndcg_at_3", "value": 64.971}, {"type": "ndcg_at_5", "value": 67.16}, {"type": "precision_at_1", "value": 84.673}, {"type": "precision_at_10", "value": 13.66}, {"type": "precision_at_100", "value": 1.5699999999999998}, {"type": "precision_at_1000", "value": 0.173}, {"type": "precision_at_20", "value": 7.19}, {"type": "precision_at_3", "value": 40.135}, {"type": "precision_at_5", "value": 25.81}, {"type": "recall_at_1", "value": 42.336}, {"type": "recall_at_10", "value": 68.298}, {"type": "recall_at_100", "value": 78.494}, {"type": "recall_at_1000", "value": 86.435}, {"type": "recall_at_20", "value": 71.904}, {"type": "recall_at_3", "value": 60.202999999999996}, {"type": "recall_at_5", "value": 64.524}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "None", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 89.0388}, {"type": "ap", "value": 84.768407855227}, {"type": "f1", "value": 89.00848365810504}]}, {"task": {"type": 
"Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "None", "config": "default", "split": "dev", "revision": "c5a29a104738b98a9e76336939199e264163d4a0"}, "metrics": [{"type": "map_at_1", "value": 22.676}, {"type": "map_at_10", "value": 35.476}, {"type": "map_at_100", "value": 36.669000000000004}, {"type": "map_at_1000", "value": 36.714999999999996}, {"type": "map_at_20", "value": 36.253}, {"type": "map_at_3", "value": 31.430000000000003}, {"type": "map_at_5", "value": 33.891}, {"type": "mrr_at_1", "value": 23.281}, {"type": "mrr_at_10", "value": 35.994}, {"type": "mrr_at_100", "value": 37.128}, {"type": "mrr_at_1000", "value": 37.169000000000004}, {"type": "mrr_at_20", "value": 36.735}, {"type": "mrr_at_3", "value": 32.025}, {"type": "mrr_at_5", "value": 34.43}, {"type": "ndcg_at_1", "value": 23.281}, {"type": "ndcg_at_10", "value": 42.548}, {"type": "ndcg_at_100", "value": 48.138999999999996}, {"type": "ndcg_at_1000", "value": 49.26}, {"type": "ndcg_at_20", "value": 45.29}, {"type": "ndcg_at_3", "value": 34.414}, {"type": "ndcg_at_5", "value": 38.775999999999996}, {"type": "precision_at_1", "value": 23.281}, {"type": "precision_at_10", "value": 6.721000000000001}, {"type": "precision_at_100", "value": 0.9490000000000001}, {"type": "precision_at_1000", "value": 0.105}, {"type": "precision_at_20", "value": 3.93}, {"type": "precision_at_3", "value": 14.67}, {"type": "precision_at_5", "value": 11.003}, {"type": "recall_at_1", "value": 22.676}, {"type": "recall_at_10", "value": 64.33}, {"type": "recall_at_100", "value": 89.836}, {"type": "recall_at_1000", "value": 98.346}, {"type": "recall_at_20", "value": 74.958}, {"type": "recall_at_3", "value": 42.437000000000005}, {"type": "recall_at_5", "value": 52.89}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "None", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 93.26493388052896}, {"type": "f1", "value": 93.09322316606121}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "None", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 79.26356589147285}, {"type": "f1", "value": 62.91191113045691}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "None", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 75.4034969737727}, {"type": "f1", "value": 73.26712703676112}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "None", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 78.55749831876263}, {"type": "f1", "value": 78.59077417507389}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "None", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 34.39782367001404}, {"type": "v_measures", "value": [0.32448893901437725, 0.3361996312847464, 0.33908138638635865, 0.3271187384761059, 0.33377012095364167, 0.36905559994096754, 0.34390086433027045, 0.360820016295285, 0.3654168102809745, 0.33993026003867693, 0.32448893901437725, 0.3361996312847464, 0.33908138638635865, 
0.3271187384761059, 0.33377012095364167, 0.36905559994096754, 0.34390086433027045, 0.360820016295285, 0.3654168102809745, 0.33993026003867693]}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "None", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 31.630415762081864}, {"type": "v_measures", "value": [0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 
0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 
0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 
0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 
0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995, 0.3036701988106334, 0.2933155184673828, 0.3026750733434484, 0.3058243831740207, 0.31157295468997015, 0.3365172382225082, 0.32195157464369284, 0.332537268880845, 0.33592713523868506, 0.31905023073699995]}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "None", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 30.989924085485676}, {"type": "mrr", "value": 31.985114880107695}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "None", "config": "default", "split": "test", "revision": "ec0fa4fe99da2ff19ca1214b7966684033a58814"}, "metrics": [{"type": "map_at_1", "value": 5.771}, {"type": "map_at_10", "value": 13.008000000000001}, {"type": "map_at_100", "value": 16.125999999999998}, {"type": "map_at_1000", "value": 17.482}, {"type": "map_at_20", "value": 14.324}, {"type": "map_at_3", "value": 9.69}, {"type": "map_at_5", "value": 11.174000000000001}, {"type": "mrr_at_1", "value": 45.201}, {"type": "mrr_at_10", "value": 53.989}, {"type": "mrr_at_100", "value": 54.50899999999999}, {"type": "mrr_at_1000", "value": 54.551}, {"type": "mrr_at_20", "value": 54.247}, {"type": "mrr_at_3", "value": 52.373999999999995}, {"type": "mrr_at_5", "value": 53.225}, {"type": "ndcg_at_1", "value": 43.808}, {"type": "ndcg_at_10", "value": 34.757}, {"type": "ndcg_at_100", "value": 31.174000000000003}, {"type": "ndcg_at_1000", "value": 39.607}, {"type": "ndcg_at_20", "value": 32.151999999999994}, {"type": "ndcg_at_3", "value": 40.458}, {"type": "ndcg_at_5", "value": 38.06}, {"type": "precision_at_1", "value": 45.201}, {"type": "precision_at_10", "value": 25.728}, {"type": "precision_at_100", "value": 7.82}, {"type": "precision_at_1000", "value": 2.032}, {"type": "precision_at_20", "value": 18.793000000000003}, {"type": "precision_at_3", "value": 38.080000000000005}, {"type": "precision_at_5", "value": 
32.879000000000005}, {"type": "recall_at_1", "value": 5.771}, {"type": "recall_at_10", "value": 16.567}, {"type": "recall_at_100", "value": 30.447999999999997}, {"type": "recall_at_1000", "value": 60.941}, {"type": "recall_at_20", "value": 20.092}, {"type": "recall_at_3", "value": 10.928}, {"type": "recall_at_5", "value": 13.235}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "None", "config": "default", "split": "test", "revision": "b774495ed302d8c44a3a7ea25c90dbce03968f31"}, "metrics": [{"type": "map_at_1", "value": 40.716}, {"type": "map_at_10", "value": 56.599999999999994}, {"type": "map_at_100", "value": 57.389}, {"type": "map_at_1000", "value": 57.408}, {"type": "map_at_20", "value": 57.154}, {"type": "map_at_3", "value": 52.577}, {"type": "map_at_5", "value": 55.076}, {"type": "mrr_at_1", "value": 45.655}, {"type": "mrr_at_10", "value": 59.014}, {"type": "mrr_at_100", "value": 59.568}, {"type": "mrr_at_1000", "value": 59.580999999999996}, {"type": "mrr_at_20", "value": 59.41499999999999}, {"type": "mrr_at_3", "value": 55.88999999999999}, {"type": "mrr_at_5", "value": 57.879999999999995}, {"type": "ndcg_at_1", "value": 45.626}, {"type": "ndcg_at_10", "value": 63.778}, {"type": "ndcg_at_100", "value": 66.905}, {"type": "ndcg_at_1000", "value": 67.322}, {"type": "ndcg_at_20", "value": 65.521}, {"type": "ndcg_at_3", "value": 56.494}, {"type": "ndcg_at_5", "value": 60.553999999999995}, {"type": "precision_at_1", "value": 45.626}, {"type": "precision_at_10", "value": 9.942}, {"type": "precision_at_100", "value": 1.169}, {"type": "precision_at_1000", "value": 0.121}, {"type": "precision_at_20", "value": 5.390000000000001}, {"type": "precision_at_3", "value": 25.135}, {"type": "precision_at_5", "value": 17.451}, {"type": "recall_at_1", "value": 40.716}, {"type": "recall_at_10", "value": 82.998}, {"type": "recall_at_100", "value": 96.236}, {"type": "recall_at_1000", "value": 99.31400000000001}, {"type": "recall_at_20", "value": 89.402}, {"type": "recall_at_3", "value": 64.47699999999999}, {"type": "recall_at_5", "value": 73.774}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "None", "config": "default", "split": "test", "revision": "e4e08e0b7dbe3c8700f0daef558ff32256715259"}, "metrics": [{"type": "map_at_1", "value": 71.679}, {"type": "map_at_10", "value": 85.63}, {"type": "map_at_100", "value": 86.24000000000001}, {"type": "map_at_1000", "value": 86.25500000000001}, {"type": "map_at_20", "value": 86.03}, {"type": "map_at_3", "value": 82.712}, {"type": "map_at_5", "value": 84.59400000000001}, {"type": "mrr_at_1", "value": 82.58}, {"type": "mrr_at_10", "value": 88.459}, {"type": "mrr_at_100", "value": 88.544}, {"type": "mrr_at_1000", "value": 88.545}, {"type": "mrr_at_20", "value": 88.521}, {"type": "mrr_at_3", "value": 87.548}, {"type": "mrr_at_5", "value": 88.19}, {"type": "ndcg_at_1", "value": 82.57}, {"type": "ndcg_at_10", "value": 89.205}, {"type": "ndcg_at_100", "value": 90.316}, {"type": "ndcg_at_1000", "value": 90.4}, {"type": "ndcg_at_20", "value": 89.802}, {"type": "ndcg_at_3", "value": 86.5}, {"type": "ndcg_at_5", "value": 88.06}, {"type": "precision_at_1", "value": 82.57}, {"type": "precision_at_10", "value": 13.511000000000001}, {"type": "precision_at_100", "value": 1.532}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_20", "value": 7.1499999999999995}, {"type": "precision_at_3", "value": 37.82}, {"type": "precision_at_5", "value": 24.892}, {"type": "recall_at_1", "value": 71.679}, {"type": 
"recall_at_10", "value": 95.926}, {"type": "recall_at_100", "value": 99.653}, {"type": "recall_at_1000", "value": 99.99}, {"type": "recall_at_20", "value": 97.81}, {"type": "recall_at_3", "value": 88.124}, {"type": "recall_at_5", "value": 92.535}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "None", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 58.980204279295776}, {"type": "v_measures", "value": [0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 
0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 
0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 
0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 
0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 
0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 
0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 
0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 
0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 
0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 
0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 
0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 
0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 
0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 
0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137, 0.6451280716471475, 0.645063311327467, 0.5315438986570028, 0.5664946021472431, 0.5738903466889544, 0.5276869089101741, 0.5904189978037212, 0.5603608879042441, 0.5568378389036701, 0.5726233719767458, 0.5477807586251173, 0.5827708688105891, 0.6065873110215666, 0.6036471736485209, 0.6912543733590332, 0.5432313459217541, 0.6228580641529852, 0.6752678197786052, 0.5716679708729834, 0.5654059124001324, 0.5454125044774013, 0.5704289785620336, 0.7083445261384431, 0.5977444086270381, 0.54260081746137]}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "None", "config": "default", "split": "test", "revision": "385e3cb46b4cfa89021f56c4380204149d0efe33"}, "metrics": [{"type": "v_measure", "value": 64.68385650734866}, {"type": "v_measures", "value": [0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 
0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 
0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 
0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 
0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 
0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608, 0.6743650530639286, 0.7047206687156294, 0.6557778331932691, 0.4282825632651972, 0.7434812486386112, 0.6326865724662851, 0.4058629298732522, 0.7451456136425593, 0.715316547891375, 0.7627466199847608]}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "None", "config": "default", "split": "test", "revision": "f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88"}, "metrics": [{"type": "map_at_1", "value": 4.1930000000000005}, {"type": "map_at_10", "value": 10.993}, {"type": "map_at_100", "value": 12.821}, {"type": "map_at_1000", "value": 13.094}, {"type": "map_at_20", 
"value": 11.899999999999999}, {"type": "map_at_3", "value": 7.753}, {"type": "map_at_5", "value": 9.479}, {"type": "mrr_at_1", "value": 20.7}, {"type": "mrr_at_10", "value": 31.776}, {"type": "mrr_at_100", "value": 32.863}, {"type": "mrr_at_1000", "value": 32.921}, {"type": "mrr_at_20", "value": 32.374}, {"type": "mrr_at_3", "value": 28.499999999999996}, {"type": "mrr_at_5", "value": 30.464999999999996}, {"type": "ndcg_at_1", "value": 20.7}, {"type": "ndcg_at_10", "value": 18.602}, {"type": "ndcg_at_100", "value": 26.063}, {"type": "ndcg_at_1000", "value": 30.988}, {"type": "ndcg_at_20", "value": 21.124000000000002}, {"type": "ndcg_at_3", "value": 17.538999999999998}, {"type": "ndcg_at_5", "value": 15.604999999999999}, {"type": "precision_at_1", "value": 20.7}, {"type": "precision_at_10", "value": 9.69}, {"type": "precision_at_100", "value": 2.051}, {"type": "precision_at_1000", "value": 0.32299999999999995}, {"type": "precision_at_20", "value": 6.3}, {"type": "precision_at_3", "value": 16.567}, {"type": "precision_at_5", "value": 13.96}, {"type": "recall_at_1", "value": 4.1930000000000005}, {"type": "recall_at_10", "value": 19.618}, {"type": "recall_at_100", "value": 41.643}, {"type": "recall_at_1000", "value": 65.693}, {"type": "recall_at_20", "value": 25.562}, {"type": "recall_at_3", "value": 10.062999999999999}, {"type": "recall_at_5", "value": 14.127999999999998}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "None", "config": "default", "split": "test", "revision": "20a6d6f312dd54037fe07a32d58e5e168867909d"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.46613174654865}, {"type": "cos_sim_spearman", "value": 80.3049357832415}, {"type": "euclidean_pearson", "value": 81.26631332583317}, {"type": "euclidean_spearman", "value": 80.3154745166346}, {"type": "manhattan_pearson", "value": 81.14703159845031}, {"type": "manhattan_spearman", "value": 80.20912001232311}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "None", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.54049067032975}, {"type": "cos_sim_spearman", "value": 80.96545866938635}, {"type": "euclidean_pearson", "value": 83.96265705630466}, {"type": "euclidean_spearman", "value": 79.93146623957664}, {"type": "manhattan_pearson", "value": 83.90680327172007}, {"type": "manhattan_spearman", "value": 79.9387741861374}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "None", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.88551701212096}, {"type": "cos_sim_spearman", "value": 87.86522961782607}, {"type": "euclidean_pearson", "value": 87.36290945594213}, {"type": "euclidean_spearman", "value": 87.83062393537139}, {"type": "manhattan_pearson", "value": 87.32544594269082}, {"type": "manhattan_spearman", "value": 87.81556963071229}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "None", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.30880458174929}, {"type": "cos_sim_spearman", "value": 83.80166079353091}, {"type": "euclidean_pearson", "value": 85.32128873266257}, {"type": "euclidean_spearman", "value": 83.86251092262333}, {"type": "manhattan_pearson", "value": 85.2712567451151}, {"type": "manhattan_spearman", "value": 83.80950203378747}]}, {"task": 
{"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "None", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.26254668067915}, {"type": "cos_sim_spearman", "value": 88.58702965856746}, {"type": "euclidean_pearson", "value": 87.9969808017743}, {"type": "euclidean_spearman", "value": 88.48082129802832}, {"type": "manhattan_pearson", "value": 88.005385920726}, {"type": "manhattan_spearman", "value": 88.48824252319064}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "None", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.9048844772477}, {"type": "cos_sim_spearman", "value": 86.81864160521327}, {"type": "euclidean_pearson", "value": 86.28264402848413}, {"type": "euclidean_spearman", "value": 86.78000025418731}, {"type": "manhattan_pearson", "value": 86.2441248990138}, {"type": "manhattan_spearman", "value": 86.75021285222047}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "None", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.489340312079}, {"type": "cos_sim_spearman", "value": 87.98810146323362}, {"type": "euclidean_pearson", "value": 89.17657344753519}, {"type": "euclidean_spearman", "value": 88.96877394433339}, {"type": "manhattan_pearson", "value": 89.17489837230771}, {"type": "manhattan_spearman", "value": 88.87394331518345}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "None", "config": "en", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 63.020191114515576}, {"type": "cos_sim_spearman", "value": 66.81821028889179}, {"type": "euclidean_pearson", "value": 66.11102477309004}, {"type": "euclidean_spearman", "value": 66.59000262767655}, {"type": "manhattan_pearson", "value": 66.0319349852117}, {"type": "manhattan_spearman", "value": 66.51366211903893}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "None", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.05763458617234}, {"type": "cos_sim_spearman", "value": 87.40353901525121}, {"type": "euclidean_pearson", "value": 87.43632331678887}, {"type": "euclidean_spearman", "value": 87.58631222421829}, {"type": "manhattan_pearson", "value": 87.40408795218912}, {"type": "manhattan_spearman", "value": 87.55530395433567}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "None", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 83.40728647106346}, {"type": "mrr", "value": 95.39606725881237}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "None", "config": "default", "split": "test", "revision": "0228b52cf27578f30900b9e5271d331663a030d7"}, "metrics": [{"type": "map_at_1", "value": 55.344}, {"type": "map_at_10", "value": 66.467}, {"type": "map_at_100", "value": 66.841}, {"type": "map_at_1000", "value": 66.86800000000001}, {"type": "map_at_20", "value": 66.728}, {"type": "map_at_3", "value": 62.888}, {"type": "map_at_5", "value": 65.10000000000001}, {"type": "mrr_at_1", "value": 58.333}, {"type": "mrr_at_10", "value": 67.471}, {"type": 
"mrr_at_100", "value": 67.75}, {"type": "mrr_at_1000", "value": 67.778}, {"type": "mrr_at_20", "value": 67.649}, {"type": "mrr_at_3", "value": 64.72200000000001}, {"type": "mrr_at_5", "value": 66.539}, {"type": "ndcg_at_1", "value": 58.333}, {"type": "ndcg_at_10", "value": 71.707}, {"type": "ndcg_at_100", "value": 73.301}, {"type": "ndcg_at_1000", "value": 74.053}, {"type": "ndcg_at_20", "value": 72.482}, {"type": "ndcg_at_3", "value": 65.561}, {"type": "ndcg_at_5", "value": 69.017}, {"type": "precision_at_1", "value": 58.333}, {"type": "precision_at_10", "value": 9.866999999999999}, {"type": "precision_at_100", "value": 1.0699999999999998}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_20", "value": 5.1}, {"type": "precision_at_3", "value": 25.778000000000002}, {"type": "precision_at_5", "value": 17.533}, {"type": "recall_at_1", "value": 55.344}, {"type": "recall_at_10", "value": 86.76700000000001}, {"type": "recall_at_100", "value": 94.0}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_20", "value": 89.60000000000001}, {"type": "recall_at_3", "value": 70.406}, {"type": "recall_at_5", "value": 79.106}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "None", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.71089108910891}, {"type": "cos_sim_ap", "value": 91.82444380538519}, {"type": "cos_sim_f1", "value": 85.34525583705911}, {"type": "cos_sim_precision", "value": 84.79763079960513}, {"type": "cos_sim_recall", "value": 85.9}, {"type": "dot_accuracy", "value": 99.56039603960396}, {"type": "dot_ap", "value": 84.71022538609428}, {"type": "dot_f1", "value": 76.18100447538538}, {"type": "dot_precision", "value": 75.76656775469831}, {"type": "dot_recall", "value": 76.6}, {"type": "euclidean_accuracy", "value": 99.7}, {"type": "euclidean_ap", "value": 91.68317023504792}, {"type": "euclidean_f1", "value": 84.65712876171682}, {"type": "euclidean_precision", "value": 83.54430379746836}, {"type": "euclidean_recall", "value": 85.8}, {"type": "manhattan_accuracy", "value": 99.69900990099009}, {"type": "manhattan_ap", "value": 91.5749511659937}, {"type": "manhattan_f1", "value": 84.6989141164857}, {"type": "manhattan_precision", "value": 83.62573099415205}, {"type": "manhattan_recall", "value": 85.8}, {"type": "max_accuracy", "value": 99.71089108910891}, {"type": "max_ap", "value": 91.82444380538519}, {"type": "max_f1", "value": 85.34525583705911}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "None", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 69.36504474977566}, {"type": "v_measures", "value": [0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 
0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 
0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 
0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 
0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 
0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 
0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 
0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 
0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465, 0.7576989668086949, 0.6941673973105086, 0.5999199814586392, 0.7009392860118014, 0.6911146596911227, 0.646390143058745, 0.6442231726625358, 0.7502350275519825, 0.6869636659371134, 0.6952444700037437, 0.763079972153315, 0.7984807201827683, 0.8009864921302298, 0.7022376752256222, 0.6419780898814442, 0.6918573402523567, 0.660312536947917, 0.6546073550319798, 0.6686135632697091, 0.6651974389583027, 0.6923843269406074, 0.6833654799568836, 0.6633431494438509, 0.7062277792579976, 0.6816924973160465]}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "None", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 34.72911995025639}, {"type": "v_measures", "value": [0.3304415914259876, 0.34135448340648167, 0.339706731244524, 0.33071893172291084, 0.3317995254408912, 0.3738836068336685, 0.35451479317768203, 0.3555924499674302, 0.3592757088728364, 0.3556241729332264, 
]}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "None", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 52.975020393803675}, {"type": "mrr", "value": 53.87404772515067}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "None", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.205065693047615}, {"type": "cos_sim_spearman", "value": 28.307951294409406}, {"type": "dot_pearson", "value": 29.15581947828465}, {"type": "dot_spearman", "value": 28.222470759389505}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "None", "config": "default", "split": "test", "revision": "bb9466bac8153a0349341eb1b22e06409e78ef4e"}, "metrics": [{"type": "map_at_1", "value": 0.249}, {"type": "map_at_10", "value": 2.243}, {"type": "map_at_100", "value": 13.791}, {"type": "map_at_1000", "value": 32.539}, {"type": "map_at_20", "value": 4.112}, {"type": "map_at_3", "value": 0.7060000000000001}, {"type": "map_at_5", "value": 1.1860000000000002}, {"type": "mrr_at_1",
"value": 96.0}, {"type": "mrr_at_10", "value": 98.0}, {"type": "mrr_at_100", "value": 98.0}, {"type": "mrr_at_1000", "value": 98.0}, {"type": "mrr_at_20", "value": 98.0}, {"type": "mrr_at_3", "value": 98.0}, {"type": "mrr_at_5", "value": 98.0}, {"type": "ndcg_at_1", "value": 92.0}, {"type": "ndcg_at_10", "value": 86.083}, {"type": "ndcg_at_100", "value": 66.471}, {"type": "ndcg_at_1000", "value": 57.31699999999999}, {"type": "ndcg_at_20", "value": 82.783}, {"type": "ndcg_at_3", "value": 88.805}, {"type": "ndcg_at_5", "value": 88.96}, {"type": "precision_at_1", "value": 96.0}, {"type": "precision_at_10", "value": 91.2}, {"type": "precision_at_100", "value": 68.16}, {"type": "precision_at_1000", "value": 25.290000000000003}, {"type": "precision_at_20", "value": 86.9}, {"type": "precision_at_3", "value": 94.0}, {"type": "precision_at_5", "value": 94.39999999999999}, {"type": "recall_at_1", "value": 0.249}, {"type": "recall_at_10", "value": 2.3800000000000003}, {"type": "recall_at_100", "value": 16.45}, {"type": "recall_at_1000", "value": 53.1}, {"type": "recall_at_20", "value": 4.4670000000000005}, {"type": "recall_at_3", "value": 0.734}, {"type": "recall_at_5", "value": 1.246}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "None", "config": "default", "split": "test", "revision": "a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f"}, "metrics": [{"type": "map_at_1", "value": 3.2520000000000002}, {"type": "map_at_10", "value": 11.805}, {"type": "map_at_100", "value": 18.749}, {"type": "map_at_1000", "value": 20.416999999999998}, {"type": "map_at_20", "value": 14.685}, {"type": "map_at_3", "value": 6.6739999999999995}, {"type": "map_at_5", "value": 8.863}, {"type": "mrr_at_1", "value": 42.857}, {"type": "mrr_at_10", "value": 57.635999999999996}, {"type": "mrr_at_100", "value": 58.034}, {"type": "mrr_at_1000", "value": 58.048}, {"type": "mrr_at_20", "value": 57.979}, {"type": "mrr_at_3", "value": 54.422000000000004}, {"type": "mrr_at_5", "value": 56.15599999999999}, {"type": "ndcg_at_1", "value": 39.796}, {"type": "ndcg_at_10", "value": 30.263}, {"type": "ndcg_at_100", "value": 40.825}, {"type": "ndcg_at_1000", "value": 52.447}, {"type": "ndcg_at_20", "value": 30.453000000000003}, {"type": "ndcg_at_3", "value": 35.086}, {"type": "ndcg_at_5", "value": 31.947}, {"type": "precision_at_1", "value": 42.857}, {"type": "precision_at_10", "value": 26.327}, {"type": "precision_at_100", "value": 8.041}, {"type": "precision_at_1000", "value": 1.582}, {"type": "precision_at_20", "value": 19.592000000000002}, {"type": "precision_at_3", "value": 36.054}, {"type": "precision_at_5", "value": 31.019999999999996}, {"type": "recall_at_1", "value": 3.2520000000000002}, {"type": "recall_at_10", "value": 18.471}, {"type": "recall_at_100", "value": 49.08}, {"type": "recall_at_1000", "value": 84.733}, {"type": "recall_at_20", "value": 26.389000000000003}, {"type": "recall_at_3", "value": 8.051}, {"type": "recall_at_5", "value": 11.672}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "None", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 68.10546875}, {"type": "ap", "value": 12.899352291322325}, {"type": "f1", "value": 52.14484661172115}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "None", "config": "default", "split": "test", "revision": 
"d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 62.323146576117715}, {"type": "f1", "value": 62.6518883448989}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "None", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 51.261957327618525}, {"type": "v_measures", "value": [0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 
0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 
0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 
0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 
0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 
0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252, 0.4873375900729135, 0.5129229336124553, 0.515681357542704, 0.511464496088557, 0.5090884385457786, 0.5125351055552001, 0.5124982980752528, 0.517332919326808, 0.5232255784709567, 0.5241090154712252]}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "None", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 87.09542826488645}, {"type": "cos_sim_ap", "value": 77.72170475021885}, {"type": "cos_sim_f1", "value": 70.67669172932331}, {"type": "cos_sim_precision", "value": 64.5238614077141}, {"type": "cos_sim_recall", "value": 78.12664907651715}, {"type": "dot_accuracy", "value": 83.96614412588663}, {"type": "dot_ap", "value": 68.08590796036842}, {"type": "dot_f1", "value": 63.934426229508205}, {"type": "dot_precision", "value": 58.854860186418115}, {"type": "dot_recall", "value": 69.9736147757256}, {"type": "euclidean_accuracy", "value": 87.20271800679502}, {"type": "euclidean_ap", "value": 77.87533191263717}, {"type": "euclidean_f1", "value": 70.92216475337455}, {"type": "euclidean_precision", "value": 67.94778825235677}, {"type": "euclidean_recall", "value": 74.1688654353562}, {"type": "manhattan_accuracy", "value": 87.20867854801216}, {"type": "manhattan_ap", "value": 
77.84249032925085}, {"type": "manhattan_f1", "value": 71.11665626949471}, {"type": "manhattan_precision", "value": 67.45562130177515}, {"type": "manhattan_recall", "value": 75.19788918205805}, {"type": "max_accuracy", "value": 87.20867854801216}, {"type": "max_ap", "value": 77.87533191263717}, {"type": "max_f1", "value": 71.11665626949471}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "None", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.22070865836147}, {"type": "cos_sim_ap", "value": 86.38617271379728}, {"type": "cos_sim_f1", "value": 78.946594085626}, {"type": "cos_sim_precision", "value": 75.5774647887324}, {"type": "cos_sim_recall", "value": 82.63012011087157}, {"type": "dot_accuracy", "value": 87.16963558039352}, {"type": "dot_ap", "value": 82.0965358395614}, {"type": "dot_f1", "value": 75.00997859138575}, {"type": "dot_precision", "value": 70.93541966920596}, {"type": "dot_recall", "value": 79.58115183246073}, {"type": "euclidean_accuracy", "value": 89.14891139830016}, {"type": "euclidean_ap", "value": 86.28000880804873}, {"type": "euclidean_f1", "value": 78.7341306347746}, {"type": "euclidean_precision", "value": 75.40706280397546}, {"type": "euclidean_recall", "value": 82.36834000615954}, {"type": "manhattan_accuracy", "value": 89.15279233127644}, {"type": "manhattan_ap", "value": 86.25024653784152}, {"type": "manhattan_f1", "value": 78.72760457406788}, {"type": "manhattan_precision", "value": 76.25369795800563}, {"type": "manhattan_recall", "value": 81.36741607637819}, {"type": "max_accuracy", "value": 89.22070865836147}, {"type": "max_ap", "value": 86.38617271379728}, {"type": "max_f1", "value": 78.946594085626}]}]}]}
dataset
null
535
ntc-ai/SDXL-LoRA-slider.ferocious-dragon
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2023-12-12T14:51:23Z
2024-02-06T00:31:44+00:00
2
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/ferocious dragon_17_3.0.png widget: - text: ferocious dragon output: url: images/ferocious dragon_17_3.0.png - text: ferocious dragon output: url: images/ferocious dragon_19_3.0.png - text: ferocious dragon output: url: images/ferocious dragon_20_3.0.png - text: ferocious dragon output: url: images/ferocious dragon_21_3.0.png - text: ferocious dragon output: url: images/ferocious dragon_22_3.0.png inference: false instance_prompt: ferocious dragon --- # ntcai.xyz slider - ferocious dragon (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/ferocious dragon_17_-3.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_17_0.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_17_3.0.png" width=256 height=256 /> | | <img src="images/ferocious dragon_19_-3.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_19_0.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_19_3.0.png" width=256 height=256 /> | | <img src="images/ferocious dragon_20_-3.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_20_0.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/b339ae6c-0c9c-4f59-bcd5-fd1ea7102aa7](https://sliders.ntcai.xyz/sliders/app/loras/b339ae6c-0c9c-4f59-bcd5-fd1ea7102aa7) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` ferocious dragon ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.ferocious-dragon', weight_name='ferocious dragon.safetensors', adapter_name="ferocious dragon") # Activate the LoRA pipe.set_adapters(["ferocious dragon"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, ferocious dragon" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. 
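As a usage note on the diffusers example above (this snippet is not part of the original card and simply reuses the variables defined there): the comparison table at the top shows the slider at strengths -3, 0 and 3, and the same `set_adapters` call can be repeated to sweep the effect, with negative weights pushing the image away from the concept.

```python
# Continues from the diffusers example above; sweep the slider strength.
# Negative weights suppress the concept, positive weights amplify it.
for strength in [-3.0, 0.0, 3.0]:
    pipe.set_adapters(["ferocious dragon"], adapter_weights=[strength])
    image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height,
                 guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
    image.save(f"result_{strength:+.1f}.png")
```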
## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
Non_BioNLP
# ntcai.xyz slider - ferocious dragon (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/ferocious dragon_17_-3.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_17_0.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_17_3.0.png" width=256 height=256 /> | | <img src="images/ferocious dragon_19_-3.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_19_0.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_19_3.0.png" width=256 height=256 /> | | <img src="images/ferocious dragon_20_-3.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_20_0.0.png" width=256 height=256 /> | <img src="images/ferocious dragon_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/b339ae6c-0c9c-4f59-bcd5-fd1ea7102aa7](https://sliders.ntcai.xyz/sliders/app/loras/b339ae6c-0c9c-4f59-bcd5-fd1ea7102aa7) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` ferocious dragon ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.ferocious-dragon', weight_name='ferocious dragon.safetensors', adapter_name="ferocious dragon") # Activate the LoRA pipe.set_adapters(["ferocious dragon"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, ferocious dragon" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
{"base_model": "stabilityai/stable-diffusion-xl-base-1.0", "language": ["en"], "license": "mit", "tags": ["text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "diffusers"], "thumbnail": "images/ferocious dragon_17_3.0.png", "widget": [{"text": "ferocious dragon", "output": {"url": "images/ferocious dragon_17_3.0.png"}}, {"text": "ferocious dragon", "output": {"url": "images/ferocious dragon_19_3.0.png"}}, {"text": "ferocious dragon", "output": {"url": "images/ferocious dragon_20_3.0.png"}}, {"text": "ferocious dragon", "output": {"url": "images/ferocious dragon_21_3.0.png"}}, {"text": "ferocious dragon", "output": {"url": "images/ferocious dragon_22_3.0.png"}}], "inference": false, "instance_prompt": "ferocious dragon"}
dataset
null
536
tsavage68/MedQA_L3_450steps_1e7rate_03beta_CSFTDPO
tsavage68
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-05-20T09:03:18Z
2024-05-20T09:09:13+00:00
5
0
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct license: llama3 tags: - trl - dpo - generated_from_trainer model-index: - name: MedQA_L3_450steps_1e7rate_03beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_450steps_1e7rate_03beta_CSFTDPO This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6479 - Rewards/chosen: 0.1876 - Rewards/rejected: 0.0690 - Rewards/accuracies: 0.6637 - Rewards/margins: 0.1186 - Logps/rejected: -21.0864 - Logps/chosen: -17.5973 - Logits/rejected: -0.9362 - Logits/chosen: -0.9357 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 450 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6938 | 0.0489 | 50 | 0.6934 | 0.0041 | 0.0042 | 0.5099 | -0.0000 | -21.3026 | -18.2088 | -0.9262 | -0.9257 | | 0.6807 | 0.0977 | 100 | 0.6781 | 0.1130 | 0.0788 | 0.6110 | 0.0343 | -21.0540 | -17.8459 | -0.9280 | -0.9275 | | 0.6689 | 0.1466 | 150 | 0.6622 | 0.1706 | 0.0922 | 0.6286 | 0.0784 | -21.0091 | -17.6540 | -0.9313 | -0.9308 | | 0.6589 | 0.1954 | 200 | 0.6569 | 0.1748 | 0.0827 | 0.6462 | 0.0921 | -21.0408 | -17.6401 | -0.9339 | -0.9334 | | 0.6798 | 0.2443 | 250 | 0.6507 | 0.1854 | 0.0751 | 0.6505 | 0.1103 | -21.0663 | -17.6047 | -0.9352 | -0.9347 | | 0.6402 | 0.2931 | 300 | 0.6482 | 0.1927 | 0.0761 | 0.6725 | 0.1166 | -21.0627 | -17.5802 | -0.9358 | -0.9352 | | 0.7088 | 0.3420 | 350 | 0.6481 | 0.1883 | 0.0698 | 0.6637 | 0.1185 | -21.0838 | -17.5951 | -0.9357 | -0.9352 | | 0.6301 | 0.3908 | 400 | 0.6487 | 0.1878 | 0.0712 | 0.6549 | 0.1166 | -21.0792 | -17.5965 | -0.9361 | -0.9356 | | 0.6454 | 0.4397 | 450 | 0.6479 | 0.1876 | 0.0690 | 0.6637 | 0.1186 | -21.0864 | -17.5973 | -0.9362 | -0.9357 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.0.0+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
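For readers unfamiliar with the reward columns above, the following is a minimal illustrative sketch of how these statistics are conventionally defined in DPO (using beta = 0.3 as suggested by the model name); it is not the training code used for this run, and the function and variable names are hypothetical.

```python
import torch
import torch.nn.functional as F

def dpo_stats(policy_chosen_logps, policy_rejected_logps,
              ref_chosen_logps, ref_rejected_logps, beta=0.3):
    # "Rewards" are the beta-scaled log-probability ratios against the frozen reference model.
    rewards_chosen = beta * (policy_chosen_logps - ref_chosen_logps)
    rewards_rejected = beta * (policy_rejected_logps - ref_rejected_logps)
    rewards_margins = rewards_chosen - rewards_rejected
    # DPO loss: negative log-sigmoid of the margin, averaged over the batch.
    loss = -F.logsigmoid(rewards_margins).mean()
    # Accuracy: fraction of pairs where the chosen response receives the higher reward.
    rewards_accuracies = (rewards_chosen > rewards_rejected).float().mean()
    return loss, rewards_chosen.mean(), rewards_rejected.mean(), rewards_margins.mean(), rewards_accuracies
```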
[ "MEDQA" ]
BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_450steps_1e7rate_03beta_CSFTDPO This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6479 - Rewards/chosen: 0.1876 - Rewards/rejected: 0.0690 - Rewards/accuracies: 0.6637 - Rewards/margins: 0.1186 - Logps/rejected: -21.0864 - Logps/chosen: -17.5973 - Logits/rejected: -0.9362 - Logits/chosen: -0.9357 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 450 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6938 | 0.0489 | 50 | 0.6934 | 0.0041 | 0.0042 | 0.5099 | -0.0000 | -21.3026 | -18.2088 | -0.9262 | -0.9257 | | 0.6807 | 0.0977 | 100 | 0.6781 | 0.1130 | 0.0788 | 0.6110 | 0.0343 | -21.0540 | -17.8459 | -0.9280 | -0.9275 | | 0.6689 | 0.1466 | 150 | 0.6622 | 0.1706 | 0.0922 | 0.6286 | 0.0784 | -21.0091 | -17.6540 | -0.9313 | -0.9308 | | 0.6589 | 0.1954 | 200 | 0.6569 | 0.1748 | 0.0827 | 0.6462 | 0.0921 | -21.0408 | -17.6401 | -0.9339 | -0.9334 | | 0.6798 | 0.2443 | 250 | 0.6507 | 0.1854 | 0.0751 | 0.6505 | 0.1103 | -21.0663 | -17.6047 | -0.9352 | -0.9347 | | 0.6402 | 0.2931 | 300 | 0.6482 | 0.1927 | 0.0761 | 0.6725 | 0.1166 | -21.0627 | -17.5802 | -0.9358 | -0.9352 | | 0.7088 | 0.3420 | 350 | 0.6481 | 0.1883 | 0.0698 | 0.6637 | 0.1185 | -21.0838 | -17.5951 | -0.9357 | -0.9352 | | 0.6301 | 0.3908 | 400 | 0.6487 | 0.1878 | 0.0712 | 0.6549 | 0.1166 | -21.0792 | -17.5965 | -0.9361 | -0.9356 | | 0.6454 | 0.4397 | 450 | 0.6479 | 0.1876 | 0.0690 | 0.6637 | 0.1186 | -21.0864 | -17.5973 | -0.9362 | -0.9357 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.0.0+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
{"base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "license": "llama3", "tags": ["trl", "dpo", "generated_from_trainer"], "model-index": [{"name": "MedQA_L3_450steps_1e7rate_03beta_CSFTDPO", "results": []}]}
dataset
null
537
microsoft/BiomedParse
microsoft
null
[ "dataset:microsoft/BiomedParseData", "license:cc-by-nc-sa-4.0", "region:us" ]
2024-11-04T21:32:56Z
2025-04-08T00:06:27+00:00
178,879
68
--- datasets: - microsoft/BiomedParseData license: cc-by-nc-sa-4.0 --- This is the official model checkpoint repo for "A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities". [[`Code`](https://github.com/microsoft/BiomedParse)] [[`Paper`](https://aka.ms/biomedparse-paper)] [[`Demo`](https://microsoft.github.io/BiomedParse/)] [[`Data`](https://huggingface.co/datasets/microsoft/BiomedParseData)] Biomedical image analysis is fundamental for biomedical discovery in cell biology, pathology, radiology, and many other biomedical domains. BiomedParse is a biomedical foundation model for image parsing that can jointly conduct segmentation, detection, and recognition across 9 imaging modalities. Through joint learning, we can improve accuracy for individual tasks and enable novel applications such as segmenting all relevant objects in an image through a text prompt, rather than requiring users to laboriously specify the bounding box for each object. BiomedParse is broadly applicable, performing image segmentation across 9 imaging modalities. ### Installation ```sh git clone https://github.com/microsoft/BiomedParse.git cd BiomedParse conda create -n biomedparse python=3.9.19 conda activate biomedparse ``` Install PyTorch ```sh conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia ``` In case there is an issue with the detectron2 installation, make sure your PyTorch version is compatible with the CUDA version on your machine at https://pytorch.org/. Install dependencies ```sh pip install -r assets/requirements/requirements.txt ``` ### Model Setup ```python from PIL import Image import torch from modeling.BaseModel import BaseModel from modeling import build_model from utilities.distributed import init_distributed from utilities.arguments import load_opt_from_config_files from utilities.constants import BIOMED_CLASSES from inference_utils.inference import interactive_infer_image from inference_utils.output_processing import check_mask_stats import numpy as np # Build model config opt = load_opt_from_config_files(["configs/biomedparse_inference.yaml"]) opt = init_distributed(opt) # Load model from pretrained weights pretrained_pth = 'hf_hub:microsoft/BiomedParse' model = BaseModel(opt, build_model(opt)).from_pretrained(pretrained_pth).eval().cuda() with torch.no_grad(): model.model.sem_seg_head.predictor.lang_encoder.get_text_embeddings(BIOMED_CLASSES + ["background"], is_eval=True) ``` ### Segmentation On Example Images ```python # RGB image input of shape (H, W, 3). Currently only batch size 1 is supported. image = Image.open('examples/Part_1_516_pathology_breast.png', formats=['png']) image = image.convert('RGB') # text prompts querying objects in the image. Multiple ones can be provided. 
prompts = ['neoplastic cells', 'inflammatory cells'] # load ground truth mask gt_masks = [] for prompt in prompts: gt_mask = Image.open(f"examples/Part_1_516_pathology_breast_{prompt.replace(' ', '+')}.png", formats=['png']) gt_mask = 1*(np.array(gt_mask.convert('RGB'))[:,:,0] > 0) gt_masks.append(gt_mask) pred_mask = interactive_infer_image(model, image, prompts) # compare predictions with ground truth masks for i, pred in enumerate(pred_mask): gt = gt_masks[i] dice = (1*(pred>0.5) & gt).sum() * 2.0 / (1*(pred>0.5).sum() + gt.sum()) print(f'Dice score for {prompts[i]}: {dice:.4f}') p_value = check_mask_stats(image, pred_mask[i]*255, 'X-Ray-Chest', prompts[i]) print(f'p-value for {prompts[i]}: {p_value:.4f}') ``` ### Usage and License Notices The model described in this repository is provided for research and development use only. The model is not intended for use in clinical decision-making or for any other clinical use, and the performance of the model for clinical use has not been established. You bear sole responsibility for any use of this model, including incorporation into any product intended for clinical use. ### Citation Please cite our paper if you use the code, model, or data. Zhao, T., Gu, Y., Yang, J. et al. A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities. Nat Methods (2024). https://doi.org/10.1038/s41592-024-02499-w ``` @article{zhao2024biomedparse, title = {A foundation model for joint segmentation, detection, and recognition of biomedical objects across nine modalities}, author = {Zhao, Theodore and Gu, Yu and Yang, Jianwei and Usuyama, Naoto and Lee, Ho Hin and Kiblawi, Sid and Naumann, Tristan and Gao, Jianfeng and Crabtree, Angela and Abel, Jacob and Moung-Wen, Christine and Piening, Brian and Bifulco, Carlo and Wei, Mu and Poon, Hoifung and Wang, Sheng}, journal = {Nature Methods}, year = {2024}, publisher = {Nature Publishing Group UK London}, url = {https://www.nature.com/articles/s41592-024-02499-w}, doi = {10.1038/s41592-024-02499-w} } ``` ### Model Architecture BiomedParse is built upon a transformer-based architecture, optimized for processing large biomedical corpora. Leveraging multi-head attention mechanisms, it excels at identifying and understanding biomedical terminology, as well as extracting contextually relevant information from dense scientific texts. The model is pre-trained on vast biomedical datasets, allowing it to generalize across various biomedical domains with high accuracy. ### Evaluation Results Please see the paper for detailed information about methods and results. https://microsoft.github.io/BiomedParse/assets/BiomedParse_arxiv.pdf ### Fairness evaluation We conducted a fairness evaluation for different sex and age groups. A two-sided independent t-test shows non-significant differences between female and male groups and between different age groups, with p-values > 5% for all imaging modalities and segmentation targets evaluated. ### Ethical Considerations and Limitations Microsoft believes Responsible AI is a shared responsibility and we have identified six principles and practices to help organizations address risks, innovate, and create value: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant use case and addresses unforeseen product misuse.  
While testing the model with images and/or text, ensure that the data is PHI-free and that there is no patient information or information that can be traced to a patient identity. The model is not designed for the following use cases: - Use by clinicians to inform clinical decision-making, as a diagnostic tool or as a medical device - Although MedImageParse is highly accurate in parsing biomedical data, it is not designed or intended to be deployed in clinical settings as-is, nor is it for use in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions (including to support clinical decision-making), or as a substitute for professional medical advice, diagnosis, treatment, or clinical judgment of a healthcare professional.  - Scenarios without consent for data - Any scenario that uses health data for a purpose for which consent was not obtained.   - Use outside of health scenarios - Any scenario that uses non-medical images and/or serves purposes outside of the healthcare domain.  Please see Microsoft's Responsible AI Principles and approach available at https://www.microsoft.com/en-us/ai/principles-and-approach/ ### Data Specification for Deployment - The model expects 2D 8-bit RGB or grayscale images by default, with pixel values ranging from 0 to 255 and a resolution of 1024*1024. - The model outputs pixel probabilities in the same shape as the input image. We convert the floating point probabilities to 8-bit grayscale outputs. The probability threshold for the segmentation mask is 0.5, which corresponds to 127.5 in the 8-bit grayscale output (a short illustrative sketch of this conversion follows the task list below). - The model takes in text prompts for segmentation and doesn't have a fixed number of targets to handle. However, to ensure quality performance, we recommend the following tasks based on evaluation results. Note that since we only evaluated the model on the test split of BiomedParseData, there is no guarantee of the same performance on external datasets even for the same task, due to variation in device, preprocessing, resolution, and other distribution shifts. For best performance, we recommend finetuning on your specific tasks. 
- CT: - abdomen: adrenal gland, aorta, bladder, duodenum, esophagus, gallbladder, kidney, kidney cyst, kidney tumor, left adrenal gland, left kidney, liver, pancreas, postcava, right adrenal gland, right kidney, spleen, stomach, tumor - colon: tumor - liver: liver, tumor - lung: COVID-19 infection, nodule - pelvis: uterus - MRI-FLAIR: brain: edema, lower-grade glioma, tumor, tumor core, whole tumor - MRI-T1-Gd: brain: enhancing tumor, tumor core - MRI-T2: prostate: prostate peripheral zone, prostate transitional zone - MRI: - abdomen: aorta, esophagus, gallbladder, kidney, left kidney, liver, pancreas, postcava, right kidney, spleen, stomach - brain: anterior hippocampus, posterior hippocampus - heart: left heart atrium, left heart ventricle, myocardium, right heart ventricle - prostate: prostate - OCT: retinal: edema - X-Ray: chest: COVID-19 infection, left lung, lung, lung opacity, right lung, viral pneumonia - Dermoscopy: skin: lesion, melanoma - Endoscope: colon: neoplastic polyp, non-neoplastic polyp, polyp - Fundus: retinal: optic cup, optic disc - Pathology: - bladder: neoplastic cells - breast: epithelial cells, neoplastic cells - cervix: neoplastic cells - colon: glandular structure, neoplastic cells - esophagus: neoplastic cells - kidney: neoplastic cells - liver: epithelial cells, neoplastic cells - ovarian: epithelial cells, neoplastic cells - prostate: neoplastic cells - skin: neoplastic cells - stomach: neoplastic cells - testis: epithelial cells - thyroid: epithelial cells, neoplastic cells - uterus: neoplastic cells - Ultrasound: - breast: benign tumor, malignant tumor, tumor - heart: left heart atrium, left heart ventricle - transperineal: fetal head, pubic symphysis
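The probability-to-mask conversion described above can be summarized with the following minimal sketch (not part of the original card; the array name is hypothetical and assumes a [0, 1] probability map as returned by the segmentation example).

```python
import numpy as np

def prob_to_outputs(prob_map: np.ndarray):
    # Convert a [0, 1] probability map into the 8-bit grayscale output and binary mask described above.
    gray = np.clip(prob_map * 255.0, 0, 255).astype(np.uint8)  # 8-bit grayscale output
    binary_mask = (prob_map > 0.5).astype(np.uint8)            # threshold 0.5, i.e. 127.5 on the 8-bit scale
    return gray, binary_mask
```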
[ "BEAR" ]
Non_BioNLP
This is the official model checkpoint repo for "A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities". [[`Code`](https://github.com/microsoft/BiomedParse)] [[`Paper`](https://aka.ms/biomedparse-paper)] [[`Demo`](https://microsoft.github.io/BiomedParse/)] [[`Data`](https://huggingface.co/datasets/microsoft/BiomedParseData)] Biomedical image analysis is fundamental for biomedical discovery in cell biology, pathology, radiology, and many other biomedical domains. BiomedParse is a biomedical foundation model for image parsing that can jointly conduct segmentation, detection, and recognition across 9 imaging modalities. Through joint learning, we can improve accuracy for individual tasks and enable novel applications such as segmenting all relevant objects in an image through a text prompt, rather than requiring users to laboriously specify the bounding box for each object. BiomedParse is broadly applicable, performing image segmentation across 9 imaging modalities. ### Installation ```sh git clone https://github.com/microsoft/BiomedParse.git cd BiomedParse conda create -n biomedparse python=3.9.19 conda activate biomedparse ``` Install PyTorch ```sh conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia ``` In case there is an issue with the detectron2 installation, make sure your PyTorch version is compatible with the CUDA version on your machine at https://pytorch.org/. Install dependencies ```sh pip install -r assets/requirements/requirements.txt ``` ### Model Setup ```python from PIL import Image import torch from modeling.BaseModel import BaseModel from modeling import build_model from utilities.distributed import init_distributed from utilities.arguments import load_opt_from_config_files from utilities.constants import BIOMED_CLASSES from inference_utils.inference import interactive_infer_image from inference_utils.output_processing import check_mask_stats import numpy as np # Build model config opt = load_opt_from_config_files(["configs/biomedparse_inference.yaml"]) opt = init_distributed(opt) # Load model from pretrained weights pretrained_pth = 'hf_hub:microsoft/BiomedParse' model = BaseModel(opt, build_model(opt)).from_pretrained(pretrained_pth).eval().cuda() with torch.no_grad(): model.model.sem_seg_head.predictor.lang_encoder.get_text_embeddings(BIOMED_CLASSES + ["background"], is_eval=True) ``` ### Segmentation On Example Images ```python # RGB image input of shape (H, W, 3). Currently only batch size 1 is supported. image = Image.open('examples/Part_1_516_pathology_breast.png', formats=['png']) image = image.convert('RGB') # text prompts querying objects in the image. Multiple ones can be provided. 
prompts = ['neoplastic cells', 'inflammatory cells'] # load ground truth mask gt_masks = [] for prompt in prompts: gt_mask = Image.open(f"examples/Part_1_516_pathology_breast_{prompt.replace(' ', '+')}.png", formats=['png']) gt_mask = 1*(np.array(gt_mask.convert('RGB'))[:,:,0] > 0) gt_masks.append(gt_mask) pred_mask = interactive_infer_image(model, image, prompts) # compare predictions with ground truth masks for i, pred in enumerate(pred_mask): gt = gt_masks[i] dice = (1*(pred>0.5) & gt).sum() * 2.0 / (1*(pred>0.5).sum() + gt.sum()) print(f'Dice score for {prompts[i]}: {dice:.4f}') p_value = check_mask_stats(image, pred_mask[i]*255, 'X-Ray-Chest', prompts[i]) print(f'p-value for {prompts[i]}: {p_value:.4f}') ``` ### Usage and License Notices The model described in this repository is provided for research and development use only. The model is not intended for use in clinical decision-making or for any other clinical use, and the performance of the model for clinical use has not been established. You bear sole responsibility for any use of this model, including incorporation into any product intended for clinical use. ### Citation Please cite our paper if you use the code, model, or data. Zhao, T., Gu, Y., Yang, J. et al. A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities. Nat Methods (2024). https://doi.org/10.1038/s41592-024-02499-w ``` @article{zhao2024biomedparse, title = {A foundation model for joint segmentation, detection, and recognition of biomedical objects across nine modalities}, author = {Zhao, Theodore and Gu, Yu and Yang, Jianwei and Usuyama, Naoto and Lee, Ho Hin and Kiblawi, Sid and Naumann, Tristan and Gao, Jianfeng and Crabtree, Angela and Abel, Jacob and Moung-Wen, Christine and Piening, Brian and Bifulco, Carlo and Wei, Mu and Poon, Hoifung and Wang, Sheng}, journal = {Nature Methods}, year = {2024}, publisher = {Nature Publishing Group UK London}, url = {https://www.nature.com/articles/s41592-024-02499-w}, doi = {10.1038/s41592-024-02499-w} } ``` ### Model Architecture BiomedParse is built upon a transformer-based architecture, optimized for processing large biomedical corpora. Leveraging multi-head attention mechanisms, it excels at identifying and understanding biomedical terminology, as well as extracting contextually relevant information from dense scientific texts. The model is pre-trained on vast biomedical datasets, allowing it to generalize across various biomedical domains with high accuracy. ### Evaluation Results Please see the paper for detailed information about methods and results. https://microsoft.github.io/BiomedParse/assets/BiomedParse_arxiv.pdf ### Fairness evaluation We conducted a fairness evaluation for different sex and age groups. A two-sided independent t-test shows non-significant differences between female and male groups and between different age groups, with p-values > 5% for all imaging modalities and segmentation targets evaluated. ### Ethical Considerations and Limitations Microsoft believes Responsible AI is a shared responsibility and we have identified six principles and practices to help organizations address risks, innovate, and create value: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant use case and addresses unforeseen product misuse.  
While testing the model with images and/or text, ensure that the data is PHI-free and that there is no patient information or information that can be traced to a patient identity. The model is not designed for the following use cases: - Use by clinicians to inform clinical decision-making, as a diagnostic tool or as a medical device - Although MedImageParse is highly accurate in parsing biomedical data, it is not designed or intended to be deployed in clinical settings as-is, nor is it for use in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions (including to support clinical decision-making), or as a substitute for professional medical advice, diagnosis, treatment, or clinical judgment of a healthcare professional.  - Scenarios without consent for data - Any scenario that uses health data for a purpose for which consent was not obtained.   - Use outside of health scenarios - Any scenario that uses non-medical images and/or serves purposes outside of the healthcare domain.  Please see Microsoft's Responsible AI Principles and approach available at https://www.microsoft.com/en-us/ai/principles-and-approach/ ### Data Specification for Deployment - The model expects 2D 8-bit RGB or grayscale images by default, with pixel values ranging from 0 to 255 and a resolution of 1024*1024. - The model outputs pixel probabilities in the same shape as the input image. We convert the floating point probabilities to 8-bit grayscale outputs. The probability threshold for the segmentation mask is 0.5, which corresponds to 127.5 in the 8-bit grayscale output. - The model takes in text prompts for segmentation and doesn't have a fixed number of targets to handle. However, to ensure quality performance, we recommend the following tasks based on evaluation results. Note that since we only evaluated the model on the test split of BiomedParseData, there is no guarantee of the same performance on external datasets even for the same task, due to variation in device, preprocessing, resolution, and other distribution shifts. For best performance, we recommend finetuning on your specific tasks. 
- CT: - abdomen: adrenal gland, aorta, bladder, duodenum, esophagus, gallbladder, kidney, kidney cyst, kidney tumor, left adrenal gland, left kidney, liver, pancreas, postcava, right adrenal gland, right kidney, spleen, stomach, tumor - colon: tumor - liver: liver, tumor - lung: COVID-19 infection, nodule - pelvis: uterus - MRI-FLAIR: brain: edema, lower-grade glioma, tumor, tumor core, whole tumor - MRI-T1-Gd: brain: enhancing tumor, tumor core - MRI-T2: prostate: prostate peripheral zone, prostate transitional zone - MRI: - abdomen: aorta, esophagus, gallbladder, kidney, left kidney, liver, pancreas, postcava, right kidney, spleen, stomach - brain: anterior hippocampus, posterior hippocampus - heart: left heart atrium, left heart ventricle, myocardium, right heart ventricle - prostate: prostate - OCT: retinal: edema - X-Ray: chest: COVID-19 infection, left lung, lung, lung opacity, right lung, viral pneumonia - Dermoscopy: skin: lesion, melanoma - Endoscope: colon: neoplastic polyp, non-neoplastic polyp, polyp - Fundus: retinal: optic cup, optic disc - Pathology: - bladder: neoplastic cells - breast: epithelial cells, neoplastic cells - cervix: neoplastic cells - colon: glandular structure, neoplastic cells - esophagus: neoplastic cells - kidney: neoplastic cells - liver: epithelial cells, neoplastic cells - ovarian: epithelial cells, neoplastic cells - prostate: neoplastic cells - skin: neoplastic cells - stomach: neoplastic cells - testis: epithelial cells - thyroid: epithelial cells, neoplastic cells - uterus: neoplastic cells - Ultrasound: - breast: benign tumor, malignant tumor, tumor - heart: left heart atrium, left heart ventricle - transperineal: fetal head, pubic symphysis
{"datasets": ["microsoft/BiomedParseData"], "license": "cc-by-nc-sa-4.0"}
dataset
null
538
davidpeer/gte-small
davidpeer
sentence-similarity
[ "sentence-transformers", "pytorch", "bert", "mteb", "sentence-similarity", "Sentence Transformers", "en", "arxiv:2308.03281", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-09-25T11:28:48Z
2023-09-25T11:32:52+00:00
13
0
--- language: - en license: mit tags: - mteb - sentence-similarity - sentence-transformers - Sentence Transformers model-index: - name: gte-small results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.22388059701493 - type: ap value: 36.09895941426988 - type: f1 value: 67.3205651539195 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.81894999999999 - type: ap value: 88.5240138417305 - type: f1 value: 91.80367382706962 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.032 - type: f1 value: 47.4490665674719 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 30.725 - type: map_at_10 value: 46.604 - type: map_at_100 value: 47.535 - type: map_at_1000 value: 47.538000000000004 - type: map_at_3 value: 41.833 - type: map_at_5 value: 44.61 - type: mrr_at_1 value: 31.223 - type: mrr_at_10 value: 46.794000000000004 - type: mrr_at_100 value: 47.725 - type: mrr_at_1000 value: 47.727000000000004 - type: mrr_at_3 value: 42.07 - type: mrr_at_5 value: 44.812000000000005 - type: ndcg_at_1 value: 30.725 - type: ndcg_at_10 value: 55.440999999999995 - type: ndcg_at_100 value: 59.134 - type: ndcg_at_1000 value: 59.199 - type: ndcg_at_3 value: 45.599000000000004 - type: ndcg_at_5 value: 50.637 - type: precision_at_1 value: 30.725 - type: precision_at_10 value: 8.364 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.848000000000003 - type: precision_at_5 value: 13.77 - type: recall_at_1 value: 30.725 - type: recall_at_10 value: 83.64200000000001 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 56.543 - type: recall_at_5 value: 68.848 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.90178078197678 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.25728393431922 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 61.720297062897764 - type: mrr value: 75.24139295607439 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 89.43527309184616 - type: cos_sim_spearman value: 88.17128615100206 - type: euclidean_pearson value: 87.89922623089282 - type: euclidean_spearman value: 87.96104039655451 - type: manhattan_pearson value: 87.9818290932077 - type: manhattan_spearman value: 88.00923426576885 - task: type: Classification dataset: name: MTEB Banking77Classification type: 
mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.0844155844156 - type: f1 value: 84.01485017302213 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.36574769259432 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 35.4857033165287 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.261 - type: map_at_10 value: 42.419000000000004 - type: map_at_100 value: 43.927 - type: map_at_1000 value: 44.055 - type: map_at_3 value: 38.597 - type: map_at_5 value: 40.701 - type: mrr_at_1 value: 36.91 - type: mrr_at_10 value: 48.02 - type: mrr_at_100 value: 48.658 - type: mrr_at_1000 value: 48.708 - type: mrr_at_3 value: 44.945 - type: mrr_at_5 value: 46.705000000000005 - type: ndcg_at_1 value: 36.91 - type: ndcg_at_10 value: 49.353 - type: ndcg_at_100 value: 54.456 - type: ndcg_at_1000 value: 56.363 - type: ndcg_at_3 value: 43.483 - type: ndcg_at_5 value: 46.150999999999996 - type: precision_at_1 value: 36.91 - type: precision_at_10 value: 9.700000000000001 - type: precision_at_100 value: 1.557 - type: precision_at_1000 value: 0.202 - type: precision_at_3 value: 21.078 - type: precision_at_5 value: 15.421999999999999 - type: recall_at_1 value: 30.261 - type: recall_at_10 value: 63.242 - type: recall_at_100 value: 84.09100000000001 - type: recall_at_1000 value: 96.143 - type: recall_at_3 value: 46.478 - type: recall_at_5 value: 53.708 - type: map_at_1 value: 31.145 - type: map_at_10 value: 40.996 - type: map_at_100 value: 42.266999999999996 - type: map_at_1000 value: 42.397 - type: map_at_3 value: 38.005 - type: map_at_5 value: 39.628 - type: mrr_at_1 value: 38.344 - type: mrr_at_10 value: 46.827000000000005 - type: mrr_at_100 value: 47.446 - type: mrr_at_1000 value: 47.489 - type: mrr_at_3 value: 44.448 - type: mrr_at_5 value: 45.747 - type: ndcg_at_1 value: 38.344 - type: ndcg_at_10 value: 46.733000000000004 - type: ndcg_at_100 value: 51.103 - type: ndcg_at_1000 value: 53.075 - type: ndcg_at_3 value: 42.366 - type: ndcg_at_5 value: 44.242 - type: precision_at_1 value: 38.344 - type: precision_at_10 value: 8.822000000000001 - type: precision_at_100 value: 1.417 - type: precision_at_1000 value: 0.187 - type: precision_at_3 value: 20.403 - type: precision_at_5 value: 14.306 - type: recall_at_1 value: 31.145 - type: recall_at_10 value: 56.909 - type: recall_at_100 value: 75.274 - type: recall_at_1000 value: 87.629 - type: recall_at_3 value: 43.784 - type: recall_at_5 value: 49.338 - type: map_at_1 value: 38.83 - type: map_at_10 value: 51.553000000000004 - type: map_at_100 value: 52.581 - type: map_at_1000 value: 52.638 - type: map_at_3 value: 48.112 - type: map_at_5 value: 50.095 - type: mrr_at_1 value: 44.513999999999996 - type: mrr_at_10 value: 54.998000000000005 - type: mrr_at_100 value: 55.650999999999996 - type: mrr_at_1000 value: 55.679 - type: mrr_at_3 value: 52.602000000000004 - type: mrr_at_5 value: 53.931 - type: ndcg_at_1 value: 44.513999999999996 - type: ndcg_at_10 value: 57.67400000000001 - type: ndcg_at_100 value: 61.663999999999994 - type: ndcg_at_1000 value: 62.743 - 
type: ndcg_at_3 value: 51.964 - type: ndcg_at_5 value: 54.773 - type: precision_at_1 value: 44.513999999999996 - type: precision_at_10 value: 9.423 - type: precision_at_100 value: 1.2309999999999999 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 23.323 - type: precision_at_5 value: 16.163 - type: recall_at_1 value: 38.83 - type: recall_at_10 value: 72.327 - type: recall_at_100 value: 89.519 - type: recall_at_1000 value: 97.041 - type: recall_at_3 value: 57.206 - type: recall_at_5 value: 63.88399999999999 - type: map_at_1 value: 25.484 - type: map_at_10 value: 34.527 - type: map_at_100 value: 35.661 - type: map_at_1000 value: 35.739 - type: map_at_3 value: 32.199 - type: map_at_5 value: 33.632 - type: mrr_at_1 value: 27.458 - type: mrr_at_10 value: 36.543 - type: mrr_at_100 value: 37.482 - type: mrr_at_1000 value: 37.543 - type: mrr_at_3 value: 34.256 - type: mrr_at_5 value: 35.618 - type: ndcg_at_1 value: 27.458 - type: ndcg_at_10 value: 39.396 - type: ndcg_at_100 value: 44.742 - type: ndcg_at_1000 value: 46.708 - type: ndcg_at_3 value: 34.817 - type: ndcg_at_5 value: 37.247 - type: precision_at_1 value: 27.458 - type: precision_at_10 value: 5.976999999999999 - type: precision_at_100 value: 0.907 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 14.878 - type: precision_at_5 value: 10.35 - type: recall_at_1 value: 25.484 - type: recall_at_10 value: 52.317 - type: recall_at_100 value: 76.701 - type: recall_at_1000 value: 91.408 - type: recall_at_3 value: 40.043 - type: recall_at_5 value: 45.879 - type: map_at_1 value: 16.719 - type: map_at_10 value: 25.269000000000002 - type: map_at_100 value: 26.442 - type: map_at_1000 value: 26.557 - type: map_at_3 value: 22.56 - type: map_at_5 value: 24.082 - type: mrr_at_1 value: 20.896 - type: mrr_at_10 value: 29.982999999999997 - type: mrr_at_100 value: 30.895 - type: mrr_at_1000 value: 30.961 - type: mrr_at_3 value: 27.239 - type: mrr_at_5 value: 28.787000000000003 - type: ndcg_at_1 value: 20.896 - type: ndcg_at_10 value: 30.814000000000004 - type: ndcg_at_100 value: 36.418 - type: ndcg_at_1000 value: 39.182 - type: ndcg_at_3 value: 25.807999999999996 - type: ndcg_at_5 value: 28.143 - type: precision_at_1 value: 20.896 - type: precision_at_10 value: 5.821 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 12.562000000000001 - type: precision_at_5 value: 9.254 - type: recall_at_1 value: 16.719 - type: recall_at_10 value: 43.155 - type: recall_at_100 value: 67.831 - type: recall_at_1000 value: 87.617 - type: recall_at_3 value: 29.259 - type: recall_at_5 value: 35.260999999999996 - type: map_at_1 value: 29.398999999999997 - type: map_at_10 value: 39.876 - type: map_at_100 value: 41.205999999999996 - type: map_at_1000 value: 41.321999999999996 - type: map_at_3 value: 36.588 - type: map_at_5 value: 38.538 - type: mrr_at_1 value: 35.9 - type: mrr_at_10 value: 45.528 - type: mrr_at_100 value: 46.343 - type: mrr_at_1000 value: 46.388 - type: mrr_at_3 value: 42.862 - type: mrr_at_5 value: 44.440000000000005 - type: ndcg_at_1 value: 35.9 - type: ndcg_at_10 value: 45.987 - type: ndcg_at_100 value: 51.370000000000005 - type: ndcg_at_1000 value: 53.400000000000006 - type: ndcg_at_3 value: 40.841 - type: ndcg_at_5 value: 43.447 - type: precision_at_1 value: 35.9 - type: precision_at_10 value: 8.393 - type: precision_at_100 value: 1.283 - type: precision_at_1000 value: 0.166 - type: precision_at_3 value: 19.538 - type: precision_at_5 value: 
13.975000000000001 - type: recall_at_1 value: 29.398999999999997 - type: recall_at_10 value: 58.361 - type: recall_at_100 value: 81.081 - type: recall_at_1000 value: 94.004 - type: recall_at_3 value: 43.657000000000004 - type: recall_at_5 value: 50.519999999999996 - type: map_at_1 value: 21.589 - type: map_at_10 value: 31.608999999999998 - type: map_at_100 value: 33.128 - type: map_at_1000 value: 33.247 - type: map_at_3 value: 28.671999999999997 - type: map_at_5 value: 30.233999999999998 - type: mrr_at_1 value: 26.712000000000003 - type: mrr_at_10 value: 36.713 - type: mrr_at_100 value: 37.713 - type: mrr_at_1000 value: 37.771 - type: mrr_at_3 value: 34.075 - type: mrr_at_5 value: 35.451 - type: ndcg_at_1 value: 26.712000000000003 - type: ndcg_at_10 value: 37.519999999999996 - type: ndcg_at_100 value: 43.946000000000005 - type: ndcg_at_1000 value: 46.297 - type: ndcg_at_3 value: 32.551 - type: ndcg_at_5 value: 34.660999999999994 - type: precision_at_1 value: 26.712000000000003 - type: precision_at_10 value: 7.066 - type: precision_at_100 value: 1.216 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 15.906 - type: precision_at_5 value: 11.437999999999999 - type: recall_at_1 value: 21.589 - type: recall_at_10 value: 50.090999999999994 - type: recall_at_100 value: 77.43900000000001 - type: recall_at_1000 value: 93.35900000000001 - type: recall_at_3 value: 36.028999999999996 - type: recall_at_5 value: 41.698 - type: map_at_1 value: 25.121666666666663 - type: map_at_10 value: 34.46258333333334 - type: map_at_100 value: 35.710499999999996 - type: map_at_1000 value: 35.82691666666666 - type: map_at_3 value: 31.563249999999996 - type: map_at_5 value: 33.189750000000004 - type: mrr_at_1 value: 29.66441666666667 - type: mrr_at_10 value: 38.5455 - type: mrr_at_100 value: 39.39566666666667 - type: mrr_at_1000 value: 39.45325 - type: mrr_at_3 value: 36.003333333333345 - type: mrr_at_5 value: 37.440916666666666 - type: ndcg_at_1 value: 29.66441666666667 - type: ndcg_at_10 value: 39.978416666666675 - type: ndcg_at_100 value: 45.278666666666666 - type: ndcg_at_1000 value: 47.52275 - type: ndcg_at_3 value: 35.00058333333334 - type: ndcg_at_5 value: 37.34908333333333 - type: precision_at_1 value: 29.66441666666667 - type: precision_at_10 value: 7.094500000000001 - type: precision_at_100 value: 1.1523333333333332 - type: precision_at_1000 value: 0.15358333333333332 - type: precision_at_3 value: 16.184166666666663 - type: precision_at_5 value: 11.6005 - type: recall_at_1 value: 25.121666666666663 - type: recall_at_10 value: 52.23975000000001 - type: recall_at_100 value: 75.48408333333333 - type: recall_at_1000 value: 90.95316666666668 - type: recall_at_3 value: 38.38458333333333 - type: recall_at_5 value: 44.39933333333333 - type: map_at_1 value: 23.569000000000003 - type: map_at_10 value: 30.389 - type: map_at_100 value: 31.396 - type: map_at_1000 value: 31.493 - type: map_at_3 value: 28.276 - type: map_at_5 value: 29.459000000000003 - type: mrr_at_1 value: 26.534000000000002 - type: mrr_at_10 value: 33.217999999999996 - type: mrr_at_100 value: 34.054 - type: mrr_at_1000 value: 34.12 - type: mrr_at_3 value: 31.058000000000003 - type: mrr_at_5 value: 32.330999999999996 - type: ndcg_at_1 value: 26.534000000000002 - type: ndcg_at_10 value: 34.608 - type: ndcg_at_100 value: 39.391999999999996 - type: ndcg_at_1000 value: 41.837999999999994 - type: ndcg_at_3 value: 30.564999999999998 - type: ndcg_at_5 value: 32.509 - type: precision_at_1 value: 26.534000000000002 - type: precision_at_10 value: 
5.414 - type: precision_at_100 value: 0.847 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 12.986 - type: precision_at_5 value: 9.202 - type: recall_at_1 value: 23.569000000000003 - type: recall_at_10 value: 44.896 - type: recall_at_100 value: 66.476 - type: recall_at_1000 value: 84.548 - type: recall_at_3 value: 33.79 - type: recall_at_5 value: 38.512 - type: map_at_1 value: 16.36 - type: map_at_10 value: 23.57 - type: map_at_100 value: 24.698999999999998 - type: map_at_1000 value: 24.834999999999997 - type: map_at_3 value: 21.093 - type: map_at_5 value: 22.418 - type: mrr_at_1 value: 19.718 - type: mrr_at_10 value: 27.139999999999997 - type: mrr_at_100 value: 28.097 - type: mrr_at_1000 value: 28.177999999999997 - type: mrr_at_3 value: 24.805 - type: mrr_at_5 value: 26.121 - type: ndcg_at_1 value: 19.718 - type: ndcg_at_10 value: 28.238999999999997 - type: ndcg_at_100 value: 33.663 - type: ndcg_at_1000 value: 36.763 - type: ndcg_at_3 value: 23.747 - type: ndcg_at_5 value: 25.796000000000003 - type: precision_at_1 value: 19.718 - type: precision_at_10 value: 5.282 - type: precision_at_100 value: 0.9390000000000001 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 11.264000000000001 - type: precision_at_5 value: 8.341 - type: recall_at_1 value: 16.36 - type: recall_at_10 value: 38.669 - type: recall_at_100 value: 63.184 - type: recall_at_1000 value: 85.33800000000001 - type: recall_at_3 value: 26.214 - type: recall_at_5 value: 31.423000000000002 - type: map_at_1 value: 25.618999999999996 - type: map_at_10 value: 34.361999999999995 - type: map_at_100 value: 35.534 - type: map_at_1000 value: 35.634 - type: map_at_3 value: 31.402 - type: map_at_5 value: 32.815 - type: mrr_at_1 value: 30.037000000000003 - type: mrr_at_10 value: 38.284 - type: mrr_at_100 value: 39.141999999999996 - type: mrr_at_1000 value: 39.2 - type: mrr_at_3 value: 35.603 - type: mrr_at_5 value: 36.867 - type: ndcg_at_1 value: 30.037000000000003 - type: ndcg_at_10 value: 39.87 - type: ndcg_at_100 value: 45.243 - type: ndcg_at_1000 value: 47.507 - type: ndcg_at_3 value: 34.371 - type: ndcg_at_5 value: 36.521 - type: precision_at_1 value: 30.037000000000003 - type: precision_at_10 value: 6.819 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 15.392 - type: precision_at_5 value: 10.821 - type: recall_at_1 value: 25.618999999999996 - type: recall_at_10 value: 52.869 - type: recall_at_100 value: 76.395 - type: recall_at_1000 value: 92.19500000000001 - type: recall_at_3 value: 37.943 - type: recall_at_5 value: 43.342999999999996 - type: map_at_1 value: 23.283 - type: map_at_10 value: 32.155 - type: map_at_100 value: 33.724 - type: map_at_1000 value: 33.939 - type: map_at_3 value: 29.018 - type: map_at_5 value: 30.864000000000004 - type: mrr_at_1 value: 28.063 - type: mrr_at_10 value: 36.632 - type: mrr_at_100 value: 37.606 - type: mrr_at_1000 value: 37.671 - type: mrr_at_3 value: 33.992 - type: mrr_at_5 value: 35.613 - type: ndcg_at_1 value: 28.063 - type: ndcg_at_10 value: 38.024 - type: ndcg_at_100 value: 44.292 - type: ndcg_at_1000 value: 46.818 - type: ndcg_at_3 value: 32.965 - type: ndcg_at_5 value: 35.562 - type: precision_at_1 value: 28.063 - type: precision_at_10 value: 7.352 - type: precision_at_100 value: 1.514 - type: precision_at_1000 value: 0.23800000000000002 - type: precision_at_3 value: 15.481 - type: precision_at_5 value: 11.542 - type: recall_at_1 value: 23.283 - 
type: recall_at_10 value: 49.756 - type: recall_at_100 value: 78.05 - type: recall_at_1000 value: 93.854 - type: recall_at_3 value: 35.408 - type: recall_at_5 value: 42.187000000000005 - type: map_at_1 value: 19.201999999999998 - type: map_at_10 value: 26.826 - type: map_at_100 value: 27.961000000000002 - type: map_at_1000 value: 28.066999999999997 - type: map_at_3 value: 24.237000000000002 - type: map_at_5 value: 25.811 - type: mrr_at_1 value: 20.887 - type: mrr_at_10 value: 28.660000000000004 - type: mrr_at_100 value: 29.660999999999998 - type: mrr_at_1000 value: 29.731 - type: mrr_at_3 value: 26.155 - type: mrr_at_5 value: 27.68 - type: ndcg_at_1 value: 20.887 - type: ndcg_at_10 value: 31.523 - type: ndcg_at_100 value: 37.055 - type: ndcg_at_1000 value: 39.579 - type: ndcg_at_3 value: 26.529000000000003 - type: ndcg_at_5 value: 29.137 - type: precision_at_1 value: 20.887 - type: precision_at_10 value: 5.065 - type: precision_at_100 value: 0.856 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 11.399 - type: precision_at_5 value: 8.392 - type: recall_at_1 value: 19.201999999999998 - type: recall_at_10 value: 44.285000000000004 - type: recall_at_100 value: 69.768 - type: recall_at_1000 value: 88.302 - type: recall_at_3 value: 30.804 - type: recall_at_5 value: 37.039 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 11.244 - type: map_at_10 value: 18.956 - type: map_at_100 value: 20.674 - type: map_at_1000 value: 20.863 - type: map_at_3 value: 15.923000000000002 - type: map_at_5 value: 17.518 - type: mrr_at_1 value: 25.080999999999996 - type: mrr_at_10 value: 35.94 - type: mrr_at_100 value: 36.969 - type: mrr_at_1000 value: 37.013 - type: mrr_at_3 value: 32.617000000000004 - type: mrr_at_5 value: 34.682 - type: ndcg_at_1 value: 25.080999999999996 - type: ndcg_at_10 value: 26.539 - type: ndcg_at_100 value: 33.601 - type: ndcg_at_1000 value: 37.203 - type: ndcg_at_3 value: 21.695999999999998 - type: ndcg_at_5 value: 23.567 - type: precision_at_1 value: 25.080999999999996 - type: precision_at_10 value: 8.143 - type: precision_at_100 value: 1.5650000000000002 - type: precision_at_1000 value: 0.22300000000000003 - type: precision_at_3 value: 15.983 - type: precision_at_5 value: 12.417 - type: recall_at_1 value: 11.244 - type: recall_at_10 value: 31.457 - type: recall_at_100 value: 55.92 - type: recall_at_1000 value: 76.372 - type: recall_at_3 value: 19.784 - type: recall_at_5 value: 24.857000000000003 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.595 - type: map_at_10 value: 18.75 - type: map_at_100 value: 26.354 - type: map_at_1000 value: 27.912 - type: map_at_3 value: 13.794 - type: map_at_5 value: 16.021 - type: mrr_at_1 value: 65.75 - type: mrr_at_10 value: 73.837 - type: mrr_at_100 value: 74.22800000000001 - type: mrr_at_1000 value: 74.234 - type: mrr_at_3 value: 72.5 - type: mrr_at_5 value: 73.387 - type: ndcg_at_1 value: 52.625 - type: ndcg_at_10 value: 39.101 - type: ndcg_at_100 value: 43.836000000000006 - type: ndcg_at_1000 value: 51.086 - type: ndcg_at_3 value: 44.229 - type: ndcg_at_5 value: 41.555 - type: precision_at_1 value: 65.75 - type: precision_at_10 value: 30.45 - type: precision_at_100 value: 9.81 - type: precision_at_1000 value: 2.045 - type: precision_at_3 value: 48.667 - type: precision_at_5 value: 40.8 - type: recall_at_1 value: 
8.595 - type: recall_at_10 value: 24.201 - type: recall_at_100 value: 50.096 - type: recall_at_1000 value: 72.677 - type: recall_at_3 value: 15.212 - type: recall_at_5 value: 18.745 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.565 - type: f1 value: 41.49914329345582 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 66.60000000000001 - type: map_at_10 value: 76.838 - type: map_at_100 value: 77.076 - type: map_at_1000 value: 77.09 - type: map_at_3 value: 75.545 - type: map_at_5 value: 76.39 - type: mrr_at_1 value: 71.707 - type: mrr_at_10 value: 81.514 - type: mrr_at_100 value: 81.64099999999999 - type: mrr_at_1000 value: 81.645 - type: mrr_at_3 value: 80.428 - type: mrr_at_5 value: 81.159 - type: ndcg_at_1 value: 71.707 - type: ndcg_at_10 value: 81.545 - type: ndcg_at_100 value: 82.477 - type: ndcg_at_1000 value: 82.73899999999999 - type: ndcg_at_3 value: 79.292 - type: ndcg_at_5 value: 80.599 - type: precision_at_1 value: 71.707 - type: precision_at_10 value: 10.035 - type: precision_at_100 value: 1.068 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 30.918 - type: precision_at_5 value: 19.328 - type: recall_at_1 value: 66.60000000000001 - type: recall_at_10 value: 91.353 - type: recall_at_100 value: 95.21 - type: recall_at_1000 value: 96.89999999999999 - type: recall_at_3 value: 85.188 - type: recall_at_5 value: 88.52 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 19.338 - type: map_at_10 value: 31.752000000000002 - type: map_at_100 value: 33.516 - type: map_at_1000 value: 33.694 - type: map_at_3 value: 27.716 - type: map_at_5 value: 29.67 - type: mrr_at_1 value: 38.117000000000004 - type: mrr_at_10 value: 47.323 - type: mrr_at_100 value: 48.13 - type: mrr_at_1000 value: 48.161 - type: mrr_at_3 value: 45.062000000000005 - type: mrr_at_5 value: 46.358 - type: ndcg_at_1 value: 38.117000000000004 - type: ndcg_at_10 value: 39.353 - type: ndcg_at_100 value: 46.044000000000004 - type: ndcg_at_1000 value: 49.083 - type: ndcg_at_3 value: 35.891 - type: ndcg_at_5 value: 36.661 - type: precision_at_1 value: 38.117000000000004 - type: precision_at_10 value: 11.187999999999999 - type: precision_at_100 value: 1.802 - type: precision_at_1000 value: 0.234 - type: precision_at_3 value: 24.126 - type: precision_at_5 value: 17.562 - type: recall_at_1 value: 19.338 - type: recall_at_10 value: 45.735 - type: recall_at_100 value: 71.281 - type: recall_at_1000 value: 89.537 - type: recall_at_3 value: 32.525 - type: recall_at_5 value: 37.671 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 36.995 - type: map_at_10 value: 55.032000000000004 - type: map_at_100 value: 55.86 - type: map_at_1000 value: 55.932 - type: map_at_3 value: 52.125 - type: map_at_5 value: 53.884 - type: mrr_at_1 value: 73.991 - type: mrr_at_10 value: 80.096 - type: mrr_at_100 value: 80.32000000000001 - type: mrr_at_1000 value: 80.331 - type: mrr_at_3 value: 79.037 - type: mrr_at_5 value: 79.719 - type: ndcg_at_1 value: 73.991 - type: ndcg_at_10 value: 63.786 - type: ndcg_at_100 value: 66.78 - type: ndcg_at_1000 value: 68.255 - type: ndcg_at_3 value: 59.501000000000005 - type: 
ndcg_at_5 value: 61.82299999999999 - type: precision_at_1 value: 73.991 - type: precision_at_10 value: 13.157 - type: precision_at_100 value: 1.552 - type: precision_at_1000 value: 0.17500000000000002 - type: precision_at_3 value: 37.519999999999996 - type: precision_at_5 value: 24.351 - type: recall_at_1 value: 36.995 - type: recall_at_10 value: 65.78699999999999 - type: recall_at_100 value: 77.583 - type: recall_at_1000 value: 87.421 - type: recall_at_3 value: 56.279999999999994 - type: recall_at_5 value: 60.878 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 86.80239999999999 - type: ap value: 81.97305141128378 - type: f1 value: 86.76976305549273 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 21.166 - type: map_at_10 value: 33.396 - type: map_at_100 value: 34.588 - type: map_at_1000 value: 34.637 - type: map_at_3 value: 29.509999999999998 - type: map_at_5 value: 31.719 - type: mrr_at_1 value: 21.762 - type: mrr_at_10 value: 33.969 - type: mrr_at_100 value: 35.099000000000004 - type: mrr_at_1000 value: 35.141 - type: mrr_at_3 value: 30.148000000000003 - type: mrr_at_5 value: 32.324000000000005 - type: ndcg_at_1 value: 21.776999999999997 - type: ndcg_at_10 value: 40.306999999999995 - type: ndcg_at_100 value: 46.068 - type: ndcg_at_1000 value: 47.3 - type: ndcg_at_3 value: 32.416 - type: ndcg_at_5 value: 36.345 - type: precision_at_1 value: 21.776999999999997 - type: precision_at_10 value: 6.433 - type: precision_at_100 value: 0.932 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.897 - type: precision_at_5 value: 10.324 - type: recall_at_1 value: 21.166 - type: recall_at_10 value: 61.587 - type: recall_at_100 value: 88.251 - type: recall_at_1000 value: 97.727 - type: recall_at_3 value: 40.196 - type: recall_at_5 value: 49.611 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.04605563155496 - type: f1 value: 92.78007303978372 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 69.65116279069767 - type: f1 value: 52.75775172527262 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.34633490248822 - type: f1 value: 68.15345065392562 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.63887020847343 - type: f1 value: 76.08074680233685 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.77933406071333 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 
32.06504927238196 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.20682480490871 - type: mrr value: 33.41462721527003 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.548 - type: map_at_10 value: 13.086999999999998 - type: map_at_100 value: 16.698 - type: map_at_1000 value: 18.151999999999997 - type: map_at_3 value: 9.576 - type: map_at_5 value: 11.175 - type: mrr_at_1 value: 44.272 - type: mrr_at_10 value: 53.635999999999996 - type: mrr_at_100 value: 54.228 - type: mrr_at_1000 value: 54.26499999999999 - type: mrr_at_3 value: 51.754 - type: mrr_at_5 value: 53.086 - type: ndcg_at_1 value: 42.724000000000004 - type: ndcg_at_10 value: 34.769 - type: ndcg_at_100 value: 32.283 - type: ndcg_at_1000 value: 40.843 - type: ndcg_at_3 value: 39.852 - type: ndcg_at_5 value: 37.858999999999995 - type: precision_at_1 value: 44.272 - type: precision_at_10 value: 26.068 - type: precision_at_100 value: 8.328000000000001 - type: precision_at_1000 value: 2.1 - type: precision_at_3 value: 37.874 - type: precision_at_5 value: 33.065 - type: recall_at_1 value: 5.548 - type: recall_at_10 value: 16.936999999999998 - type: recall_at_100 value: 33.72 - type: recall_at_1000 value: 64.348 - type: recall_at_3 value: 10.764999999999999 - type: recall_at_5 value: 13.361 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 28.008 - type: map_at_10 value: 42.675000000000004 - type: map_at_100 value: 43.85 - type: map_at_1000 value: 43.884 - type: map_at_3 value: 38.286 - type: map_at_5 value: 40.78 - type: mrr_at_1 value: 31.518 - type: mrr_at_10 value: 45.015 - type: mrr_at_100 value: 45.924 - type: mrr_at_1000 value: 45.946999999999996 - type: mrr_at_3 value: 41.348 - type: mrr_at_5 value: 43.428 - type: ndcg_at_1 value: 31.489 - type: ndcg_at_10 value: 50.285999999999994 - type: ndcg_at_100 value: 55.291999999999994 - type: ndcg_at_1000 value: 56.05 - type: ndcg_at_3 value: 41.976 - type: ndcg_at_5 value: 46.103 - type: precision_at_1 value: 31.489 - type: precision_at_10 value: 8.456 - type: precision_at_100 value: 1.125 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 19.09 - type: precision_at_5 value: 13.841000000000001 - type: recall_at_1 value: 28.008 - type: recall_at_10 value: 71.21499999999999 - type: recall_at_100 value: 92.99 - type: recall_at_1000 value: 98.578 - type: recall_at_3 value: 49.604 - type: recall_at_5 value: 59.094 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.351 - type: map_at_10 value: 84.163 - type: map_at_100 value: 84.785 - type: map_at_1000 value: 84.801 - type: map_at_3 value: 81.16 - type: map_at_5 value: 83.031 - type: mrr_at_1 value: 80.96 - type: mrr_at_10 value: 87.241 - type: mrr_at_100 value: 87.346 - type: mrr_at_1000 value: 87.347 - type: mrr_at_3 value: 86.25699999999999 - type: mrr_at_5 value: 86.907 - type: ndcg_at_1 value: 80.97 - type: ndcg_at_10 value: 88.017 - type: ndcg_at_100 value: 89.241 - type: ndcg_at_1000 value: 89.34299999999999 - type: ndcg_at_3 value: 85.053 - type: ndcg_at_5 value: 86.663 - type: precision_at_1 value: 80.97 - type: precision_at_10 value: 13.358 - type: precision_at_100 value: 1.525 - type: 
precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.143 - type: precision_at_5 value: 24.451999999999998 - type: recall_at_1 value: 70.351 - type: recall_at_10 value: 95.39800000000001 - type: recall_at_100 value: 99.55199999999999 - type: recall_at_1000 value: 99.978 - type: recall_at_3 value: 86.913 - type: recall_at_5 value: 91.448 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 55.62406719814139 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 61.386700035141736 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.618 - type: map_at_10 value: 12.920000000000002 - type: map_at_100 value: 15.304 - type: map_at_1000 value: 15.656999999999998 - type: map_at_3 value: 9.187 - type: map_at_5 value: 10.937 - type: mrr_at_1 value: 22.8 - type: mrr_at_10 value: 35.13 - type: mrr_at_100 value: 36.239 - type: mrr_at_1000 value: 36.291000000000004 - type: mrr_at_3 value: 31.917 - type: mrr_at_5 value: 33.787 - type: ndcg_at_1 value: 22.8 - type: ndcg_at_10 value: 21.382 - type: ndcg_at_100 value: 30.257 - type: ndcg_at_1000 value: 36.001 - type: ndcg_at_3 value: 20.43 - type: ndcg_at_5 value: 17.622 - type: precision_at_1 value: 22.8 - type: precision_at_10 value: 11.26 - type: precision_at_100 value: 2.405 - type: precision_at_1000 value: 0.377 - type: precision_at_3 value: 19.633 - type: precision_at_5 value: 15.68 - type: recall_at_1 value: 4.618 - type: recall_at_10 value: 22.811999999999998 - type: recall_at_100 value: 48.787000000000006 - type: recall_at_1000 value: 76.63799999999999 - type: recall_at_3 value: 11.952 - type: recall_at_5 value: 15.892000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.01529458252244 - type: cos_sim_spearman value: 77.92985224770254 - type: euclidean_pearson value: 81.04251429422487 - type: euclidean_spearman value: 77.92838490549133 - type: manhattan_pearson value: 80.95892251458979 - type: manhattan_spearman value: 77.81028089705941 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 83.97885282534388 - type: cos_sim_spearman value: 75.1221970851712 - type: euclidean_pearson value: 80.34455956720097 - type: euclidean_spearman value: 74.5894274239938 - type: manhattan_pearson value: 80.38999766325465 - type: manhattan_spearman value: 74.68524557166975 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 82.95746064915672 - type: cos_sim_spearman value: 85.08683458043946 - type: euclidean_pearson value: 84.56699492836385 - type: euclidean_spearman value: 85.66089116133713 - type: manhattan_pearson value: 84.47553323458541 - type: manhattan_spearman value: 85.56142206781472 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: 
cos_sim_pearson value: 82.71377893595067 - type: cos_sim_spearman value: 81.03453291428589 - type: euclidean_pearson value: 82.57136298308613 - type: euclidean_spearman value: 81.15839961890875 - type: manhattan_pearson value: 82.55157879373837 - type: manhattan_spearman value: 81.1540163767054 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.64197832372373 - type: cos_sim_spearman value: 88.31966852492485 - type: euclidean_pearson value: 87.98692129976983 - type: euclidean_spearman value: 88.6247340837856 - type: manhattan_pearson value: 87.90437827826412 - type: manhattan_spearman value: 88.56278787131457 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 81.84159950146693 - type: cos_sim_spearman value: 83.90678384140168 - type: euclidean_pearson value: 83.19005018860221 - type: euclidean_spearman value: 84.16260415876295 - type: manhattan_pearson value: 83.05030612994494 - type: manhattan_spearman value: 83.99605629718336 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.49935350176666 - type: cos_sim_spearman value: 87.59086606735383 - type: euclidean_pearson value: 88.06537181129983 - type: euclidean_spearman value: 87.6687448086014 - type: manhattan_pearson value: 87.96599131972935 - type: manhattan_spearman value: 87.63295748969642 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.68232799482763 - type: cos_sim_spearman value: 67.99930378085793 - type: euclidean_pearson value: 68.50275360001696 - type: euclidean_spearman value: 67.81588179309259 - type: manhattan_pearson value: 68.5892154749763 - type: manhattan_spearman value: 67.84357259640682 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.37049618406554 - type: cos_sim_spearman value: 85.57014313159492 - type: euclidean_pearson value: 85.57469513908282 - type: euclidean_spearman value: 85.661948135258 - type: manhattan_pearson value: 85.36866831229028 - type: manhattan_spearman value: 85.5043455368843 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 84.83259065376154 - type: mrr value: 95.58455433455433 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 58.817 - type: map_at_10 value: 68.459 - type: map_at_100 value: 68.951 - type: map_at_1000 value: 68.979 - type: map_at_3 value: 65.791 - type: map_at_5 value: 67.583 - type: mrr_at_1 value: 61.667 - type: mrr_at_10 value: 69.368 - type: mrr_at_100 value: 69.721 - type: mrr_at_1000 value: 69.744 - type: mrr_at_3 value: 67.278 - type: mrr_at_5 value: 68.611 - type: ndcg_at_1 value: 61.667 - type: ndcg_at_10 value: 72.70100000000001 - type: ndcg_at_100 value: 74.928 - type: ndcg_at_1000 value: 75.553 - type: ndcg_at_3 value: 68.203 - 
type: ndcg_at_5 value: 70.804 - type: precision_at_1 value: 61.667 - type: precision_at_10 value: 9.533 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.444000000000003 - type: precision_at_5 value: 17.599999999999998 - type: recall_at_1 value: 58.817 - type: recall_at_10 value: 84.789 - type: recall_at_100 value: 95.0 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 72.8 - type: recall_at_5 value: 79.294 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8108910891089 - type: cos_sim_ap value: 95.5743678558349 - type: cos_sim_f1 value: 90.43133366385722 - type: cos_sim_precision value: 89.67551622418878 - type: cos_sim_recall value: 91.2 - type: dot_accuracy value: 99.75841584158415 - type: dot_ap value: 94.00786363627253 - type: dot_f1 value: 87.51910341314316 - type: dot_precision value: 89.20041536863967 - type: dot_recall value: 85.9 - type: euclidean_accuracy value: 99.81485148514851 - type: euclidean_ap value: 95.4752113136905 - type: euclidean_f1 value: 90.44334975369456 - type: euclidean_precision value: 89.126213592233 - type: euclidean_recall value: 91.8 - type: manhattan_accuracy value: 99.81584158415842 - type: manhattan_ap value: 95.5163172682464 - type: manhattan_f1 value: 90.51987767584097 - type: manhattan_precision value: 92.3076923076923 - type: manhattan_recall value: 88.8 - type: max_accuracy value: 99.81584158415842 - type: max_ap value: 95.5743678558349 - type: max_f1 value: 90.51987767584097 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 62.63235986949449 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.334795589585575 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.02955214518782 - type: mrr value: 52.8004838298956 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.63769566275453 - type: cos_sim_spearman value: 30.422379185989335 - type: dot_pearson value: 26.88493071882256 - type: dot_spearman value: 26.505249740971305 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.21 - type: map_at_10 value: 1.654 - type: map_at_100 value: 10.095 - type: map_at_1000 value: 25.808999999999997 - type: map_at_3 value: 0.594 - type: map_at_5 value: 0.9289999999999999 - type: mrr_at_1 value: 78.0 - type: mrr_at_10 value: 87.019 - type: mrr_at_100 value: 87.019 - type: mrr_at_1000 value: 87.019 - type: mrr_at_3 value: 86.333 - type: mrr_at_5 value: 86.733 - type: ndcg_at_1 value: 73.0 - type: ndcg_at_10 value: 66.52900000000001 - type: ndcg_at_100 value: 53.433 - type: ndcg_at_1000 value: 51.324000000000005 - type: ndcg_at_3 value: 
72.02199999999999 - type: ndcg_at_5 value: 69.696 - type: precision_at_1 value: 78.0 - type: precision_at_10 value: 70.39999999999999 - type: precision_at_100 value: 55.46 - type: precision_at_1000 value: 22.758 - type: precision_at_3 value: 76.667 - type: precision_at_5 value: 74.0 - type: recall_at_1 value: 0.21 - type: recall_at_10 value: 1.8849999999999998 - type: recall_at_100 value: 13.801 - type: recall_at_1000 value: 49.649 - type: recall_at_3 value: 0.632 - type: recall_at_5 value: 1.009 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.797 - type: map_at_10 value: 9.01 - type: map_at_100 value: 14.682 - type: map_at_1000 value: 16.336000000000002 - type: map_at_3 value: 4.546 - type: map_at_5 value: 5.9270000000000005 - type: mrr_at_1 value: 24.490000000000002 - type: mrr_at_10 value: 41.156 - type: mrr_at_100 value: 42.392 - type: mrr_at_1000 value: 42.408 - type: mrr_at_3 value: 38.775999999999996 - type: mrr_at_5 value: 40.102 - type: ndcg_at_1 value: 21.429000000000002 - type: ndcg_at_10 value: 22.222 - type: ndcg_at_100 value: 34.405 - type: ndcg_at_1000 value: 46.599000000000004 - type: ndcg_at_3 value: 25.261 - type: ndcg_at_5 value: 22.695999999999998 - type: precision_at_1 value: 24.490000000000002 - type: precision_at_10 value: 19.796 - type: precision_at_100 value: 7.306 - type: precision_at_1000 value: 1.5350000000000001 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 22.857 - type: recall_at_1 value: 1.797 - type: recall_at_10 value: 15.706000000000001 - type: recall_at_100 value: 46.412 - type: recall_at_1000 value: 83.159 - type: recall_at_3 value: 6.1370000000000005 - type: recall_at_5 value: 8.599 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.3302 - type: ap value: 14.169121204575601 - type: f1 value: 54.229345975274235 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 58.22297679683077 - type: f1 value: 58.62984908377875 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 49.952922428464255 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.68140907194373 - type: cos_sim_ap value: 70.12180123666836 - type: cos_sim_f1 value: 65.77501791258658 - type: cos_sim_precision value: 60.07853403141361 - type: cos_sim_recall value: 72.66490765171504 - type: dot_accuracy value: 81.92167848840674 - type: dot_ap value: 60.49837581423469 - type: dot_f1 value: 58.44186046511628 - type: dot_precision value: 52.24532224532224 - type: dot_recall value: 66.3060686015831 - type: euclidean_accuracy value: 84.73505394289802 - type: euclidean_ap value: 70.3278904593286 - type: euclidean_f1 value: 65.98851124940161 - type: euclidean_precision value: 60.38107752956636 - type: euclidean_recall value: 72.74406332453826 - type: 
manhattan_accuracy value: 84.73505394289802 - type: manhattan_ap value: 70.00737738537337 - type: manhattan_f1 value: 65.80150784822642 - type: manhattan_precision value: 61.892583120204606 - type: manhattan_recall value: 70.23746701846966 - type: max_accuracy value: 84.73505394289802 - type: max_ap value: 70.3278904593286 - type: max_f1 value: 65.98851124940161 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.44258159661582 - type: cos_sim_ap value: 84.91926704880888 - type: cos_sim_f1 value: 77.07651086632926 - type: cos_sim_precision value: 74.5894554883319 - type: cos_sim_recall value: 79.73514012935017 - type: dot_accuracy value: 85.88116583226608 - type: dot_ap value: 78.9753854779923 - type: dot_f1 value: 72.17757637979255 - type: dot_precision value: 66.80647486729143 - type: dot_recall value: 78.48783492454572 - type: euclidean_accuracy value: 88.5299025885823 - type: euclidean_ap value: 85.08006075642194 - type: euclidean_f1 value: 77.29637336504163 - type: euclidean_precision value: 74.69836253950014 - type: euclidean_recall value: 80.08161379735141 - type: manhattan_accuracy value: 88.55124771995187 - type: manhattan_ap value: 85.00941529932851 - type: manhattan_f1 value: 77.33100233100232 - type: manhattan_precision value: 73.37572573956317 - type: manhattan_recall value: 81.73698798891284 - type: max_accuracy value: 88.55124771995187 - type: max_ap value: 85.08006075642194 - type: max_f1 value: 77.33100233100232 --- # gte-small THIS IS A COPY FROM thenlper/gte-small General Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281) The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc. ## Metrics We compared the performance of the GTE models with other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard). 
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 | | [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024| 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 | | [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 | | [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 | | [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 | | [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 | | [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 | ## Usage Code example ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] input_texts = [ "what is the capital of China?", "how to implement quick sort in python?", "Beijing", "sorting algorithms" ] tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small") model = AutoModel.from_pretrained("thenlper/gte-small") # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:1] @ embeddings[1:].T) * 100 print(scores.tolist()) ``` Use with 
sentence-transformers: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim sentences = ['That is a happy person', 'That is a very happy person'] model = SentenceTransformer('thenlper/gte-small') embeddings = model.encode(sentences) print(cos_sim(embeddings[0], embeddings[1])) ``` ### Limitation This model exclusively caters to English texts, and any lengthy texts will be truncated to a maximum of 512 tokens. ### Citation If you find our paper or models helpful, please consider citing them as follows: ``` @misc{li2023general, title={Towards General Text Embeddings with Multi-stage Contrastive Learning}, author={Zehan Li and Xin Zhang and Yanzhao Zhang and Dingkun Long and Pengjun Xie and Meishan Zhang}, year={2023}, eprint={2308.03281}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
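Because any input longer than 512 tokens is silently truncated (see the limitation above), it can help to check token counts before encoding. The snippet below is a small illustrative sketch using the model's Hugging Face tokenizer, not an official recommendation from the model authors.

```python
# Illustrative sketch: flag texts that exceed the 512-token context before encoding.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small")

def fits_in_context(text: str, max_length: int = 512) -> bool:
    """Return True if `text` encodes to at most `max_length` tokens (including special tokens)."""
    return len(tokenizer(text, truncation=False)["input_ids"]) <= max_length

print(fits_in_context("That is a happy person"))  # True
print(fits_in_context("word " * 1000))            # False: this text would be truncated at 512 tokens
```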
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
# gte-small THIS IS A COPY FROM thenlper/gte-small General Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281) The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc. ## Metrics We compared the performance of the GTE models with other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard). | Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 | | [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024| 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 | | [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 | | [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 | | [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 | | [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 | | [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 
## Usage

Code example:

```python
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel

def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Mean-pool the token embeddings, ignoring padding positions
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]

input_texts = [
    "what is the capital of China?",
    "how to implement quick sort in python?",
    "Beijing",
    "sorting algorithms"
]

tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small")
model = AutoModel.from_pretrained("thenlper/gte-small")

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)

# Similarity of the first text (the query) against the remaining texts, scaled by 100
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```

Use with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

sentences = ['That is a happy person', 'That is a very happy person']

model = SentenceTransformer('thenlper/gte-small')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```

### Limitation

This model handles English text only, and any input longer than 512 tokens is truncated to that length. A minimal sketch for checking input length before encoding follows the citation below.

### Citation

If you find our paper or models helpful, please consider citing them as follows:

```
@misc{li2023general,
      title={Towards General Text Embeddings with Multi-stage Contrastive Learning},
      author={Zehan Li and Xin Zhang and Yanzhao Zhang and Dingkun Long and Pengjun Xie and Meishan Zhang},
      year={2023},
      eprint={2308.03281},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
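As a rough guard against the 512-token limit noted above, input length can be checked with the same tokenizer before encoding. This is a minimal sketch; the helper name and example texts are illustrative and not part of the original card.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small")

def fits_in_context(text: str, max_length: int = 512) -> bool:
    # Count tokens including the special tokens the model adds
    n_tokens = len(tokenizer(text, add_special_tokens=True)["input_ids"])
    return n_tokens <= max_length

print(fits_in_context("what is the capital of China?"))  # True
print(fits_in_context("word " * 1000))                   # False -> would be truncated to 512 tokens
```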
{"language": ["en"], "license": "mit", "tags": ["mteb", "sentence-similarity", "sentence-transformers", "Sentence Transformers"], "model-index": [{"name": "gte-small", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 73.22388059701493}, {"type": "ap", "value": 36.09895941426988}, {"type": "f1", "value": 67.3205651539195}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 91.81894999999999}, {"type": "ap", "value": 88.5240138417305}, {"type": "f1", "value": 91.80367382706962}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 48.032}, {"type": "f1", "value": 47.4490665674719}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 30.725}, {"type": "map_at_10", "value": 46.604}, {"type": "map_at_100", "value": 47.535}, {"type": "map_at_1000", "value": 47.538000000000004}, {"type": "map_at_3", "value": 41.833}, {"type": "map_at_5", "value": 44.61}, {"type": "mrr_at_1", "value": 31.223}, {"type": "mrr_at_10", "value": 46.794000000000004}, {"type": "mrr_at_100", "value": 47.725}, {"type": "mrr_at_1000", "value": 47.727000000000004}, {"type": "mrr_at_3", "value": 42.07}, {"type": "mrr_at_5", "value": 44.812000000000005}, {"type": "ndcg_at_1", "value": 30.725}, {"type": "ndcg_at_10", "value": 55.440999999999995}, {"type": "ndcg_at_100", "value": 59.134}, {"type": "ndcg_at_1000", "value": 59.199}, {"type": "ndcg_at_3", "value": 45.599000000000004}, {"type": "ndcg_at_5", "value": 50.637}, {"type": "precision_at_1", "value": 30.725}, {"type": "precision_at_10", "value": 8.364}, {"type": "precision_at_100", "value": 0.991}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 18.848000000000003}, {"type": "precision_at_5", "value": 13.77}, {"type": "recall_at_1", "value": 30.725}, {"type": "recall_at_10", "value": 83.64200000000001}, {"type": "recall_at_100", "value": 99.14699999999999}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 56.543}, {"type": "recall_at_5", "value": 68.848}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 47.90178078197678}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 40.25728393431922}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, 
"metrics": [{"type": "map", "value": 61.720297062897764}, {"type": "mrr", "value": 75.24139295607439}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 89.43527309184616}, {"type": "cos_sim_spearman", "value": 88.17128615100206}, {"type": "euclidean_pearson", "value": 87.89922623089282}, {"type": "euclidean_spearman", "value": 87.96104039655451}, {"type": "manhattan_pearson", "value": 87.9818290932077}, {"type": "manhattan_spearman", "value": 88.00923426576885}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 84.0844155844156}, {"type": "f1", "value": 84.01485017302213}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 38.36574769259432}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 35.4857033165287}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 30.261}, {"type": "map_at_10", "value": 42.419000000000004}, {"type": "map_at_100", "value": 43.927}, {"type": "map_at_1000", "value": 44.055}, {"type": "map_at_3", "value": 38.597}, {"type": "map_at_5", "value": 40.701}, {"type": "mrr_at_1", "value": 36.91}, {"type": "mrr_at_10", "value": 48.02}, {"type": "mrr_at_100", "value": 48.658}, {"type": "mrr_at_1000", "value": 48.708}, {"type": "mrr_at_3", "value": 44.945}, {"type": "mrr_at_5", "value": 46.705000000000005}, {"type": "ndcg_at_1", "value": 36.91}, {"type": "ndcg_at_10", "value": 49.353}, {"type": "ndcg_at_100", "value": 54.456}, {"type": "ndcg_at_1000", "value": 56.363}, {"type": "ndcg_at_3", "value": 43.483}, {"type": "ndcg_at_5", "value": 46.150999999999996}, {"type": "precision_at_1", "value": 36.91}, {"type": "precision_at_10", "value": 9.700000000000001}, {"type": "precision_at_100", "value": 1.557}, {"type": "precision_at_1000", "value": 0.202}, {"type": "precision_at_3", "value": 21.078}, {"type": "precision_at_5", "value": 15.421999999999999}, {"type": "recall_at_1", "value": 30.261}, {"type": "recall_at_10", "value": 63.242}, {"type": "recall_at_100", "value": 84.09100000000001}, {"type": "recall_at_1000", "value": 96.143}, {"type": "recall_at_3", "value": 46.478}, {"type": "recall_at_5", "value": 53.708}, {"type": "map_at_1", "value": 31.145}, {"type": "map_at_10", "value": 40.996}, {"type": "map_at_100", "value": 42.266999999999996}, {"type": "map_at_1000", "value": 42.397}, {"type": "map_at_3", "value": 38.005}, {"type": "map_at_5", "value": 39.628}, {"type": "mrr_at_1", "value": 38.344}, {"type": "mrr_at_10", "value": 46.827000000000005}, {"type": "mrr_at_100", "value": 47.446}, {"type": "mrr_at_1000", "value": 47.489}, {"type": "mrr_at_3", "value": 44.448}, {"type": "mrr_at_5", "value": 45.747}, 
{"type": "ndcg_at_1", "value": 38.344}, {"type": "ndcg_at_10", "value": 46.733000000000004}, {"type": "ndcg_at_100", "value": 51.103}, {"type": "ndcg_at_1000", "value": 53.075}, {"type": "ndcg_at_3", "value": 42.366}, {"type": "ndcg_at_5", "value": 44.242}, {"type": "precision_at_1", "value": 38.344}, {"type": "precision_at_10", "value": 8.822000000000001}, {"type": "precision_at_100", "value": 1.417}, {"type": "precision_at_1000", "value": 0.187}, {"type": "precision_at_3", "value": 20.403}, {"type": "precision_at_5", "value": 14.306}, {"type": "recall_at_1", "value": 31.145}, {"type": "recall_at_10", "value": 56.909}, {"type": "recall_at_100", "value": 75.274}, {"type": "recall_at_1000", "value": 87.629}, {"type": "recall_at_3", "value": 43.784}, {"type": "recall_at_5", "value": 49.338}, {"type": "map_at_1", "value": 38.83}, {"type": "map_at_10", "value": 51.553000000000004}, {"type": "map_at_100", "value": 52.581}, {"type": "map_at_1000", "value": 52.638}, {"type": "map_at_3", "value": 48.112}, {"type": "map_at_5", "value": 50.095}, {"type": "mrr_at_1", "value": 44.513999999999996}, {"type": "mrr_at_10", "value": 54.998000000000005}, {"type": "mrr_at_100", "value": 55.650999999999996}, {"type": "mrr_at_1000", "value": 55.679}, {"type": "mrr_at_3", "value": 52.602000000000004}, {"type": "mrr_at_5", "value": 53.931}, {"type": "ndcg_at_1", "value": 44.513999999999996}, {"type": "ndcg_at_10", "value": 57.67400000000001}, {"type": "ndcg_at_100", "value": 61.663999999999994}, {"type": "ndcg_at_1000", "value": 62.743}, {"type": "ndcg_at_3", "value": 51.964}, {"type": "ndcg_at_5", "value": 54.773}, {"type": "precision_at_1", "value": 44.513999999999996}, {"type": "precision_at_10", "value": 9.423}, {"type": "precision_at_100", "value": 1.2309999999999999}, {"type": "precision_at_1000", "value": 0.13699999999999998}, {"type": "precision_at_3", "value": 23.323}, {"type": "precision_at_5", "value": 16.163}, {"type": "recall_at_1", "value": 38.83}, {"type": "recall_at_10", "value": 72.327}, {"type": "recall_at_100", "value": 89.519}, {"type": "recall_at_1000", "value": 97.041}, {"type": "recall_at_3", "value": 57.206}, {"type": "recall_at_5", "value": 63.88399999999999}, {"type": "map_at_1", "value": 25.484}, {"type": "map_at_10", "value": 34.527}, {"type": "map_at_100", "value": 35.661}, {"type": "map_at_1000", "value": 35.739}, {"type": "map_at_3", "value": 32.199}, {"type": "map_at_5", "value": 33.632}, {"type": "mrr_at_1", "value": 27.458}, {"type": "mrr_at_10", "value": 36.543}, {"type": "mrr_at_100", "value": 37.482}, {"type": "mrr_at_1000", "value": 37.543}, {"type": "mrr_at_3", "value": 34.256}, {"type": "mrr_at_5", "value": 35.618}, {"type": "ndcg_at_1", "value": 27.458}, {"type": "ndcg_at_10", "value": 39.396}, {"type": "ndcg_at_100", "value": 44.742}, {"type": "ndcg_at_1000", "value": 46.708}, {"type": "ndcg_at_3", "value": 34.817}, {"type": "ndcg_at_5", "value": 37.247}, {"type": "precision_at_1", "value": 27.458}, {"type": "precision_at_10", "value": 5.976999999999999}, {"type": "precision_at_100", "value": 0.907}, {"type": "precision_at_1000", "value": 0.11100000000000002}, {"type": "precision_at_3", "value": 14.878}, {"type": "precision_at_5", "value": 10.35}, {"type": "recall_at_1", "value": 25.484}, {"type": "recall_at_10", "value": 52.317}, {"type": "recall_at_100", "value": 76.701}, {"type": "recall_at_1000", "value": 91.408}, {"type": "recall_at_3", "value": 40.043}, {"type": "recall_at_5", "value": 45.879}, {"type": "map_at_1", "value": 16.719}, {"type": "map_at_10", "value": 
25.269000000000002}, {"type": "map_at_100", "value": 26.442}, {"type": "map_at_1000", "value": 26.557}, {"type": "map_at_3", "value": 22.56}, {"type": "map_at_5", "value": 24.082}, {"type": "mrr_at_1", "value": 20.896}, {"type": "mrr_at_10", "value": 29.982999999999997}, {"type": "mrr_at_100", "value": 30.895}, {"type": "mrr_at_1000", "value": 30.961}, {"type": "mrr_at_3", "value": 27.239}, {"type": "mrr_at_5", "value": 28.787000000000003}, {"type": "ndcg_at_1", "value": 20.896}, {"type": "ndcg_at_10", "value": 30.814000000000004}, {"type": "ndcg_at_100", "value": 36.418}, {"type": "ndcg_at_1000", "value": 39.182}, {"type": "ndcg_at_3", "value": 25.807999999999996}, {"type": "ndcg_at_5", "value": 28.143}, {"type": "precision_at_1", "value": 20.896}, {"type": "precision_at_10", "value": 5.821}, {"type": "precision_at_100", "value": 0.991}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_3", "value": 12.562000000000001}, {"type": "precision_at_5", "value": 9.254}, {"type": "recall_at_1", "value": 16.719}, {"type": "recall_at_10", "value": 43.155}, {"type": "recall_at_100", "value": 67.831}, {"type": "recall_at_1000", "value": 87.617}, {"type": "recall_at_3", "value": 29.259}, {"type": "recall_at_5", "value": 35.260999999999996}, {"type": "map_at_1", "value": 29.398999999999997}, {"type": "map_at_10", "value": 39.876}, {"type": "map_at_100", "value": 41.205999999999996}, {"type": "map_at_1000", "value": 41.321999999999996}, {"type": "map_at_3", "value": 36.588}, {"type": "map_at_5", "value": 38.538}, {"type": "mrr_at_1", "value": 35.9}, {"type": "mrr_at_10", "value": 45.528}, {"type": "mrr_at_100", "value": 46.343}, {"type": "mrr_at_1000", "value": 46.388}, {"type": "mrr_at_3", "value": 42.862}, {"type": "mrr_at_5", "value": 44.440000000000005}, {"type": "ndcg_at_1", "value": 35.9}, {"type": "ndcg_at_10", "value": 45.987}, {"type": "ndcg_at_100", "value": 51.370000000000005}, {"type": "ndcg_at_1000", "value": 53.400000000000006}, {"type": "ndcg_at_3", "value": 40.841}, {"type": "ndcg_at_5", "value": 43.447}, {"type": "precision_at_1", "value": 35.9}, {"type": "precision_at_10", "value": 8.393}, {"type": "precision_at_100", "value": 1.283}, {"type": "precision_at_1000", "value": 0.166}, {"type": "precision_at_3", "value": 19.538}, {"type": "precision_at_5", "value": 13.975000000000001}, {"type": "recall_at_1", "value": 29.398999999999997}, {"type": "recall_at_10", "value": 58.361}, {"type": "recall_at_100", "value": 81.081}, {"type": "recall_at_1000", "value": 94.004}, {"type": "recall_at_3", "value": 43.657000000000004}, {"type": "recall_at_5", "value": 50.519999999999996}, {"type": "map_at_1", "value": 21.589}, {"type": "map_at_10", "value": 31.608999999999998}, {"type": "map_at_100", "value": 33.128}, {"type": "map_at_1000", "value": 33.247}, {"type": "map_at_3", "value": 28.671999999999997}, {"type": "map_at_5", "value": 30.233999999999998}, {"type": "mrr_at_1", "value": 26.712000000000003}, {"type": "mrr_at_10", "value": 36.713}, {"type": "mrr_at_100", "value": 37.713}, {"type": "mrr_at_1000", "value": 37.771}, {"type": "mrr_at_3", "value": 34.075}, {"type": "mrr_at_5", "value": 35.451}, {"type": "ndcg_at_1", "value": 26.712000000000003}, {"type": "ndcg_at_10", "value": 37.519999999999996}, {"type": "ndcg_at_100", "value": 43.946000000000005}, {"type": "ndcg_at_1000", "value": 46.297}, {"type": "ndcg_at_3", "value": 32.551}, {"type": "ndcg_at_5", "value": 34.660999999999994}, {"type": "precision_at_1", "value": 26.712000000000003}, {"type": "precision_at_10", 
"value": 7.066}, {"type": "precision_at_100", "value": 1.216}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 15.906}, {"type": "precision_at_5", "value": 11.437999999999999}, {"type": "recall_at_1", "value": 21.589}, {"type": "recall_at_10", "value": 50.090999999999994}, {"type": "recall_at_100", "value": 77.43900000000001}, {"type": "recall_at_1000", "value": 93.35900000000001}, {"type": "recall_at_3", "value": 36.028999999999996}, {"type": "recall_at_5", "value": 41.698}, {"type": "map_at_1", "value": 25.121666666666663}, {"type": "map_at_10", "value": 34.46258333333334}, {"type": "map_at_100", "value": 35.710499999999996}, {"type": "map_at_1000", "value": 35.82691666666666}, {"type": "map_at_3", "value": 31.563249999999996}, {"type": "map_at_5", "value": 33.189750000000004}, {"type": "mrr_at_1", "value": 29.66441666666667}, {"type": "mrr_at_10", "value": 38.5455}, {"type": "mrr_at_100", "value": 39.39566666666667}, {"type": "mrr_at_1000", "value": 39.45325}, {"type": "mrr_at_3", "value": 36.003333333333345}, {"type": "mrr_at_5", "value": 37.440916666666666}, {"type": "ndcg_at_1", "value": 29.66441666666667}, {"type": "ndcg_at_10", "value": 39.978416666666675}, {"type": "ndcg_at_100", "value": 45.278666666666666}, {"type": "ndcg_at_1000", "value": 47.52275}, {"type": "ndcg_at_3", "value": 35.00058333333334}, {"type": "ndcg_at_5", "value": 37.34908333333333}, {"type": "precision_at_1", "value": 29.66441666666667}, {"type": "precision_at_10", "value": 7.094500000000001}, {"type": "precision_at_100", "value": 1.1523333333333332}, {"type": "precision_at_1000", "value": 0.15358333333333332}, {"type": "precision_at_3", "value": 16.184166666666663}, {"type": "precision_at_5", "value": 11.6005}, {"type": "recall_at_1", "value": 25.121666666666663}, {"type": "recall_at_10", "value": 52.23975000000001}, {"type": "recall_at_100", "value": 75.48408333333333}, {"type": "recall_at_1000", "value": 90.95316666666668}, {"type": "recall_at_3", "value": 38.38458333333333}, {"type": "recall_at_5", "value": 44.39933333333333}, {"type": "map_at_1", "value": 23.569000000000003}, {"type": "map_at_10", "value": 30.389}, {"type": "map_at_100", "value": 31.396}, {"type": "map_at_1000", "value": 31.493}, {"type": "map_at_3", "value": 28.276}, {"type": "map_at_5", "value": 29.459000000000003}, {"type": "mrr_at_1", "value": 26.534000000000002}, {"type": "mrr_at_10", "value": 33.217999999999996}, {"type": "mrr_at_100", "value": 34.054}, {"type": "mrr_at_1000", "value": 34.12}, {"type": "mrr_at_3", "value": 31.058000000000003}, {"type": "mrr_at_5", "value": 32.330999999999996}, {"type": "ndcg_at_1", "value": 26.534000000000002}, {"type": "ndcg_at_10", "value": 34.608}, {"type": "ndcg_at_100", "value": 39.391999999999996}, {"type": "ndcg_at_1000", "value": 41.837999999999994}, {"type": "ndcg_at_3", "value": 30.564999999999998}, {"type": "ndcg_at_5", "value": 32.509}, {"type": "precision_at_1", "value": 26.534000000000002}, {"type": "precision_at_10", "value": 5.414}, {"type": "precision_at_100", "value": 0.847}, {"type": "precision_at_1000", "value": 0.11399999999999999}, {"type": "precision_at_3", "value": 12.986}, {"type": "precision_at_5", "value": 9.202}, {"type": "recall_at_1", "value": 23.569000000000003}, {"type": "recall_at_10", "value": 44.896}, {"type": "recall_at_100", "value": 66.476}, {"type": "recall_at_1000", "value": 84.548}, {"type": "recall_at_3", "value": 33.79}, {"type": "recall_at_5", "value": 38.512}, {"type": "map_at_1", "value": 16.36}, {"type": "map_at_10", 
"value": 23.57}, {"type": "map_at_100", "value": 24.698999999999998}, {"type": "map_at_1000", "value": 24.834999999999997}, {"type": "map_at_3", "value": 21.093}, {"type": "map_at_5", "value": 22.418}, {"type": "mrr_at_1", "value": 19.718}, {"type": "mrr_at_10", "value": 27.139999999999997}, {"type": "mrr_at_100", "value": 28.097}, {"type": "mrr_at_1000", "value": 28.177999999999997}, {"type": "mrr_at_3", "value": 24.805}, {"type": "mrr_at_5", "value": 26.121}, {"type": "ndcg_at_1", "value": 19.718}, {"type": "ndcg_at_10", "value": 28.238999999999997}, {"type": "ndcg_at_100", "value": 33.663}, {"type": "ndcg_at_1000", "value": 36.763}, {"type": "ndcg_at_3", "value": 23.747}, {"type": "ndcg_at_5", "value": 25.796000000000003}, {"type": "precision_at_1", "value": 19.718}, {"type": "precision_at_10", "value": 5.282}, {"type": "precision_at_100", "value": 0.9390000000000001}, {"type": "precision_at_1000", "value": 0.13899999999999998}, {"type": "precision_at_3", "value": 11.264000000000001}, {"type": "precision_at_5", "value": 8.341}, {"type": "recall_at_1", "value": 16.36}, {"type": "recall_at_10", "value": 38.669}, {"type": "recall_at_100", "value": 63.184}, {"type": "recall_at_1000", "value": 85.33800000000001}, {"type": "recall_at_3", "value": 26.214}, {"type": "recall_at_5", "value": 31.423000000000002}, {"type": "map_at_1", "value": 25.618999999999996}, {"type": "map_at_10", "value": 34.361999999999995}, {"type": "map_at_100", "value": 35.534}, {"type": "map_at_1000", "value": 35.634}, {"type": "map_at_3", "value": 31.402}, {"type": "map_at_5", "value": 32.815}, {"type": "mrr_at_1", "value": 30.037000000000003}, {"type": "mrr_at_10", "value": 38.284}, {"type": "mrr_at_100", "value": 39.141999999999996}, {"type": "mrr_at_1000", "value": 39.2}, {"type": "mrr_at_3", "value": 35.603}, {"type": "mrr_at_5", "value": 36.867}, {"type": "ndcg_at_1", "value": 30.037000000000003}, {"type": "ndcg_at_10", "value": 39.87}, {"type": "ndcg_at_100", "value": 45.243}, {"type": "ndcg_at_1000", "value": 47.507}, {"type": "ndcg_at_3", "value": 34.371}, {"type": "ndcg_at_5", "value": 36.521}, {"type": "precision_at_1", "value": 30.037000000000003}, {"type": "precision_at_10", "value": 6.819}, {"type": "precision_at_100", "value": 1.0699999999999998}, {"type": "precision_at_1000", "value": 0.13699999999999998}, {"type": "precision_at_3", "value": 15.392}, {"type": "precision_at_5", "value": 10.821}, {"type": "recall_at_1", "value": 25.618999999999996}, {"type": "recall_at_10", "value": 52.869}, {"type": "recall_at_100", "value": 76.395}, {"type": "recall_at_1000", "value": 92.19500000000001}, {"type": "recall_at_3", "value": 37.943}, {"type": "recall_at_5", "value": 43.342999999999996}, {"type": "map_at_1", "value": 23.283}, {"type": "map_at_10", "value": 32.155}, {"type": "map_at_100", "value": 33.724}, {"type": "map_at_1000", "value": 33.939}, {"type": "map_at_3", "value": 29.018}, {"type": "map_at_5", "value": 30.864000000000004}, {"type": "mrr_at_1", "value": 28.063}, {"type": "mrr_at_10", "value": 36.632}, {"type": "mrr_at_100", "value": 37.606}, {"type": "mrr_at_1000", "value": 37.671}, {"type": "mrr_at_3", "value": 33.992}, {"type": "mrr_at_5", "value": 35.613}, {"type": "ndcg_at_1", "value": 28.063}, {"type": "ndcg_at_10", "value": 38.024}, {"type": "ndcg_at_100", "value": 44.292}, {"type": "ndcg_at_1000", "value": 46.818}, {"type": "ndcg_at_3", "value": 32.965}, {"type": "ndcg_at_5", "value": 35.562}, {"type": "precision_at_1", "value": 28.063}, {"type": "precision_at_10", "value": 7.352}, {"type": 
"precision_at_100", "value": 1.514}, {"type": "precision_at_1000", "value": 0.23800000000000002}, {"type": "precision_at_3", "value": 15.481}, {"type": "precision_at_5", "value": 11.542}, {"type": "recall_at_1", "value": 23.283}, {"type": "recall_at_10", "value": 49.756}, {"type": "recall_at_100", "value": 78.05}, {"type": "recall_at_1000", "value": 93.854}, {"type": "recall_at_3", "value": 35.408}, {"type": "recall_at_5", "value": 42.187000000000005}, {"type": "map_at_1", "value": 19.201999999999998}, {"type": "map_at_10", "value": 26.826}, {"type": "map_at_100", "value": 27.961000000000002}, {"type": "map_at_1000", "value": 28.066999999999997}, {"type": "map_at_3", "value": 24.237000000000002}, {"type": "map_at_5", "value": 25.811}, {"type": "mrr_at_1", "value": 20.887}, {"type": "mrr_at_10", "value": 28.660000000000004}, {"type": "mrr_at_100", "value": 29.660999999999998}, {"type": "mrr_at_1000", "value": 29.731}, {"type": "mrr_at_3", "value": 26.155}, {"type": "mrr_at_5", "value": 27.68}, {"type": "ndcg_at_1", "value": 20.887}, {"type": "ndcg_at_10", "value": 31.523}, {"type": "ndcg_at_100", "value": 37.055}, {"type": "ndcg_at_1000", "value": 39.579}, {"type": "ndcg_at_3", "value": 26.529000000000003}, {"type": "ndcg_at_5", "value": 29.137}, {"type": "precision_at_1", "value": 20.887}, {"type": "precision_at_10", "value": 5.065}, {"type": "precision_at_100", "value": 0.856}, {"type": "precision_at_1000", "value": 0.11900000000000001}, {"type": "precision_at_3", "value": 11.399}, {"type": "precision_at_5", "value": 8.392}, {"type": "recall_at_1", "value": 19.201999999999998}, {"type": "recall_at_10", "value": 44.285000000000004}, {"type": "recall_at_100", "value": 69.768}, {"type": "recall_at_1000", "value": 88.302}, {"type": "recall_at_3", "value": 30.804}, {"type": "recall_at_5", "value": 37.039}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 11.244}, {"type": "map_at_10", "value": 18.956}, {"type": "map_at_100", "value": 20.674}, {"type": "map_at_1000", "value": 20.863}, {"type": "map_at_3", "value": 15.923000000000002}, {"type": "map_at_5", "value": 17.518}, {"type": "mrr_at_1", "value": 25.080999999999996}, {"type": "mrr_at_10", "value": 35.94}, {"type": "mrr_at_100", "value": 36.969}, {"type": "mrr_at_1000", "value": 37.013}, {"type": "mrr_at_3", "value": 32.617000000000004}, {"type": "mrr_at_5", "value": 34.682}, {"type": "ndcg_at_1", "value": 25.080999999999996}, {"type": "ndcg_at_10", "value": 26.539}, {"type": "ndcg_at_100", "value": 33.601}, {"type": "ndcg_at_1000", "value": 37.203}, {"type": "ndcg_at_3", "value": 21.695999999999998}, {"type": "ndcg_at_5", "value": 23.567}, {"type": "precision_at_1", "value": 25.080999999999996}, {"type": "precision_at_10", "value": 8.143}, {"type": "precision_at_100", "value": 1.5650000000000002}, {"type": "precision_at_1000", "value": 0.22300000000000003}, {"type": "precision_at_3", "value": 15.983}, {"type": "precision_at_5", "value": 12.417}, {"type": "recall_at_1", "value": 11.244}, {"type": "recall_at_10", "value": 31.457}, {"type": "recall_at_100", "value": 55.92}, {"type": "recall_at_1000", "value": 76.372}, {"type": "recall_at_3", "value": 19.784}, {"type": "recall_at_5", "value": 24.857000000000003}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": 
"map_at_1", "value": 8.595}, {"type": "map_at_10", "value": 18.75}, {"type": "map_at_100", "value": 26.354}, {"type": "map_at_1000", "value": 27.912}, {"type": "map_at_3", "value": 13.794}, {"type": "map_at_5", "value": 16.021}, {"type": "mrr_at_1", "value": 65.75}, {"type": "mrr_at_10", "value": 73.837}, {"type": "mrr_at_100", "value": 74.22800000000001}, {"type": "mrr_at_1000", "value": 74.234}, {"type": "mrr_at_3", "value": 72.5}, {"type": "mrr_at_5", "value": 73.387}, {"type": "ndcg_at_1", "value": 52.625}, {"type": "ndcg_at_10", "value": 39.101}, {"type": "ndcg_at_100", "value": 43.836000000000006}, {"type": "ndcg_at_1000", "value": 51.086}, {"type": "ndcg_at_3", "value": 44.229}, {"type": "ndcg_at_5", "value": 41.555}, {"type": "precision_at_1", "value": 65.75}, {"type": "precision_at_10", "value": 30.45}, {"type": "precision_at_100", "value": 9.81}, {"type": "precision_at_1000", "value": 2.045}, {"type": "precision_at_3", "value": 48.667}, {"type": "precision_at_5", "value": 40.8}, {"type": "recall_at_1", "value": 8.595}, {"type": "recall_at_10", "value": 24.201}, {"type": "recall_at_100", "value": 50.096}, {"type": "recall_at_1000", "value": 72.677}, {"type": "recall_at_3", "value": 15.212}, {"type": "recall_at_5", "value": 18.745}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 46.565}, {"type": "f1", "value": 41.49914329345582}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 66.60000000000001}, {"type": "map_at_10", "value": 76.838}, {"type": "map_at_100", "value": 77.076}, {"type": "map_at_1000", "value": 77.09}, {"type": "map_at_3", "value": 75.545}, {"type": "map_at_5", "value": 76.39}, {"type": "mrr_at_1", "value": 71.707}, {"type": "mrr_at_10", "value": 81.514}, {"type": "mrr_at_100", "value": 81.64099999999999}, {"type": "mrr_at_1000", "value": 81.645}, {"type": "mrr_at_3", "value": 80.428}, {"type": "mrr_at_5", "value": 81.159}, {"type": "ndcg_at_1", "value": 71.707}, {"type": "ndcg_at_10", "value": 81.545}, {"type": "ndcg_at_100", "value": 82.477}, {"type": "ndcg_at_1000", "value": 82.73899999999999}, {"type": "ndcg_at_3", "value": 79.292}, {"type": "ndcg_at_5", "value": 80.599}, {"type": "precision_at_1", "value": 71.707}, {"type": "precision_at_10", "value": 10.035}, {"type": "precision_at_100", "value": 1.068}, {"type": "precision_at_1000", "value": 0.11100000000000002}, {"type": "precision_at_3", "value": 30.918}, {"type": "precision_at_5", "value": 19.328}, {"type": "recall_at_1", "value": 66.60000000000001}, {"type": "recall_at_10", "value": 91.353}, {"type": "recall_at_100", "value": 95.21}, {"type": "recall_at_1000", "value": 96.89999999999999}, {"type": "recall_at_3", "value": 85.188}, {"type": "recall_at_5", "value": 88.52}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 19.338}, {"type": "map_at_10", "value": 31.752000000000002}, {"type": "map_at_100", "value": 33.516}, {"type": "map_at_1000", "value": 33.694}, {"type": "map_at_3", "value": 27.716}, {"type": "map_at_5", "value": 29.67}, {"type": "mrr_at_1", "value": 38.117000000000004}, {"type": "mrr_at_10", "value": 47.323}, {"type": 
"mrr_at_100", "value": 48.13}, {"type": "mrr_at_1000", "value": 48.161}, {"type": "mrr_at_3", "value": 45.062000000000005}, {"type": "mrr_at_5", "value": 46.358}, {"type": "ndcg_at_1", "value": 38.117000000000004}, {"type": "ndcg_at_10", "value": 39.353}, {"type": "ndcg_at_100", "value": 46.044000000000004}, {"type": "ndcg_at_1000", "value": 49.083}, {"type": "ndcg_at_3", "value": 35.891}, {"type": "ndcg_at_5", "value": 36.661}, {"type": "precision_at_1", "value": 38.117000000000004}, {"type": "precision_at_10", "value": 11.187999999999999}, {"type": "precision_at_100", "value": 1.802}, {"type": "precision_at_1000", "value": 0.234}, {"type": "precision_at_3", "value": 24.126}, {"type": "precision_at_5", "value": 17.562}, {"type": "recall_at_1", "value": 19.338}, {"type": "recall_at_10", "value": 45.735}, {"type": "recall_at_100", "value": 71.281}, {"type": "recall_at_1000", "value": 89.537}, {"type": "recall_at_3", "value": 32.525}, {"type": "recall_at_5", "value": 37.671}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 36.995}, {"type": "map_at_10", "value": 55.032000000000004}, {"type": "map_at_100", "value": 55.86}, {"type": "map_at_1000", "value": 55.932}, {"type": "map_at_3", "value": 52.125}, {"type": "map_at_5", "value": 53.884}, {"type": "mrr_at_1", "value": 73.991}, {"type": "mrr_at_10", "value": 80.096}, {"type": "mrr_at_100", "value": 80.32000000000001}, {"type": "mrr_at_1000", "value": 80.331}, {"type": "mrr_at_3", "value": 79.037}, {"type": "mrr_at_5", "value": 79.719}, {"type": "ndcg_at_1", "value": 73.991}, {"type": "ndcg_at_10", "value": 63.786}, {"type": "ndcg_at_100", "value": 66.78}, {"type": "ndcg_at_1000", "value": 68.255}, {"type": "ndcg_at_3", "value": 59.501000000000005}, {"type": "ndcg_at_5", "value": 61.82299999999999}, {"type": "precision_at_1", "value": 73.991}, {"type": "precision_at_10", "value": 13.157}, {"type": "precision_at_100", "value": 1.552}, {"type": "precision_at_1000", "value": 0.17500000000000002}, {"type": "precision_at_3", "value": 37.519999999999996}, {"type": "precision_at_5", "value": 24.351}, {"type": "recall_at_1", "value": 36.995}, {"type": "recall_at_10", "value": 65.78699999999999}, {"type": "recall_at_100", "value": 77.583}, {"type": "recall_at_1000", "value": 87.421}, {"type": "recall_at_3", "value": 56.279999999999994}, {"type": "recall_at_5", "value": 60.878}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 86.80239999999999}, {"type": "ap", "value": 81.97305141128378}, {"type": "f1", "value": 86.76976305549273}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.166}, {"type": "map_at_10", "value": 33.396}, {"type": "map_at_100", "value": 34.588}, {"type": "map_at_1000", "value": 34.637}, {"type": "map_at_3", "value": 29.509999999999998}, {"type": "map_at_5", "value": 31.719}, {"type": "mrr_at_1", "value": 21.762}, {"type": "mrr_at_10", "value": 33.969}, {"type": "mrr_at_100", "value": 35.099000000000004}, {"type": "mrr_at_1000", "value": 35.141}, {"type": "mrr_at_3", "value": 30.148000000000003}, {"type": "mrr_at_5", "value": 32.324000000000005}, {"type": 
"ndcg_at_1", "value": 21.776999999999997}, {"type": "ndcg_at_10", "value": 40.306999999999995}, {"type": "ndcg_at_100", "value": 46.068}, {"type": "ndcg_at_1000", "value": 47.3}, {"type": "ndcg_at_3", "value": 32.416}, {"type": "ndcg_at_5", "value": 36.345}, {"type": "precision_at_1", "value": 21.776999999999997}, {"type": "precision_at_10", "value": 6.433}, {"type": "precision_at_100", "value": 0.932}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 13.897}, {"type": "precision_at_5", "value": 10.324}, {"type": "recall_at_1", "value": 21.166}, {"type": "recall_at_10", "value": 61.587}, {"type": "recall_at_100", "value": 88.251}, {"type": "recall_at_1000", "value": 97.727}, {"type": "recall_at_3", "value": 40.196}, {"type": "recall_at_5", "value": 49.611}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 93.04605563155496}, {"type": "f1", "value": 92.78007303978372}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 69.65116279069767}, {"type": "f1", "value": 52.75775172527262}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 70.34633490248822}, {"type": "f1", "value": 68.15345065392562}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 75.63887020847343}, {"type": "f1", "value": 76.08074680233685}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 33.77933406071333}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 32.06504927238196}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 32.20682480490871}, {"type": "mrr", "value": 33.41462721527003}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 5.548}, {"type": "map_at_10", "value": 13.086999999999998}, {"type": "map_at_100", "value": 16.698}, {"type": "map_at_1000", "value": 18.151999999999997}, {"type": "map_at_3", "value": 9.576}, {"type": "map_at_5", "value": 11.175}, {"type": "mrr_at_1", "value": 44.272}, {"type": "mrr_at_10", "value": 53.635999999999996}, {"type": "mrr_at_100", "value": 54.228}, {"type": 
"mrr_at_1000", "value": 54.26499999999999}, {"type": "mrr_at_3", "value": 51.754}, {"type": "mrr_at_5", "value": 53.086}, {"type": "ndcg_at_1", "value": 42.724000000000004}, {"type": "ndcg_at_10", "value": 34.769}, {"type": "ndcg_at_100", "value": 32.283}, {"type": "ndcg_at_1000", "value": 40.843}, {"type": "ndcg_at_3", "value": 39.852}, {"type": "ndcg_at_5", "value": 37.858999999999995}, {"type": "precision_at_1", "value": 44.272}, {"type": "precision_at_10", "value": 26.068}, {"type": "precision_at_100", "value": 8.328000000000001}, {"type": "precision_at_1000", "value": 2.1}, {"type": "precision_at_3", "value": 37.874}, {"type": "precision_at_5", "value": 33.065}, {"type": "recall_at_1", "value": 5.548}, {"type": "recall_at_10", "value": 16.936999999999998}, {"type": "recall_at_100", "value": 33.72}, {"type": "recall_at_1000", "value": 64.348}, {"type": "recall_at_3", "value": 10.764999999999999}, {"type": "recall_at_5", "value": 13.361}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 28.008}, {"type": "map_at_10", "value": 42.675000000000004}, {"type": "map_at_100", "value": 43.85}, {"type": "map_at_1000", "value": 43.884}, {"type": "map_at_3", "value": 38.286}, {"type": "map_at_5", "value": 40.78}, {"type": "mrr_at_1", "value": 31.518}, {"type": "mrr_at_10", "value": 45.015}, {"type": "mrr_at_100", "value": 45.924}, {"type": "mrr_at_1000", "value": 45.946999999999996}, {"type": "mrr_at_3", "value": 41.348}, {"type": "mrr_at_5", "value": 43.428}, {"type": "ndcg_at_1", "value": 31.489}, {"type": "ndcg_at_10", "value": 50.285999999999994}, {"type": "ndcg_at_100", "value": 55.291999999999994}, {"type": "ndcg_at_1000", "value": 56.05}, {"type": "ndcg_at_3", "value": 41.976}, {"type": "ndcg_at_5", "value": 46.103}, {"type": "precision_at_1", "value": 31.489}, {"type": "precision_at_10", "value": 8.456}, {"type": "precision_at_100", "value": 1.125}, {"type": "precision_at_1000", "value": 0.12}, {"type": "precision_at_3", "value": 19.09}, {"type": "precision_at_5", "value": 13.841000000000001}, {"type": "recall_at_1", "value": 28.008}, {"type": "recall_at_10", "value": 71.21499999999999}, {"type": "recall_at_100", "value": 92.99}, {"type": "recall_at_1000", "value": 98.578}, {"type": "recall_at_3", "value": 49.604}, {"type": "recall_at_5", "value": 59.094}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.351}, {"type": "map_at_10", "value": 84.163}, {"type": "map_at_100", "value": 84.785}, {"type": "map_at_1000", "value": 84.801}, {"type": "map_at_3", "value": 81.16}, {"type": "map_at_5", "value": 83.031}, {"type": "mrr_at_1", "value": 80.96}, {"type": "mrr_at_10", "value": 87.241}, {"type": "mrr_at_100", "value": 87.346}, {"type": "mrr_at_1000", "value": 87.347}, {"type": "mrr_at_3", "value": 86.25699999999999}, {"type": "mrr_at_5", "value": 86.907}, {"type": "ndcg_at_1", "value": 80.97}, {"type": "ndcg_at_10", "value": 88.017}, {"type": "ndcg_at_100", "value": 89.241}, {"type": "ndcg_at_1000", "value": 89.34299999999999}, {"type": "ndcg_at_3", "value": 85.053}, {"type": "ndcg_at_5", "value": 86.663}, {"type": "precision_at_1", "value": 80.97}, {"type": "precision_at_10", "value": 13.358}, {"type": "precision_at_100", "value": 1.525}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", 
"value": 37.143}, {"type": "precision_at_5", "value": 24.451999999999998}, {"type": "recall_at_1", "value": 70.351}, {"type": "recall_at_10", "value": 95.39800000000001}, {"type": "recall_at_100", "value": 99.55199999999999}, {"type": "recall_at_1000", "value": 99.978}, {"type": "recall_at_3", "value": 86.913}, {"type": "recall_at_5", "value": 91.448}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 55.62406719814139}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 61.386700035141736}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.618}, {"type": "map_at_10", "value": 12.920000000000002}, {"type": "map_at_100", "value": 15.304}, {"type": "map_at_1000", "value": 15.656999999999998}, {"type": "map_at_3", "value": 9.187}, {"type": "map_at_5", "value": 10.937}, {"type": "mrr_at_1", "value": 22.8}, {"type": "mrr_at_10", "value": 35.13}, {"type": "mrr_at_100", "value": 36.239}, {"type": "mrr_at_1000", "value": 36.291000000000004}, {"type": "mrr_at_3", "value": 31.917}, {"type": "mrr_at_5", "value": 33.787}, {"type": "ndcg_at_1", "value": 22.8}, {"type": "ndcg_at_10", "value": 21.382}, {"type": "ndcg_at_100", "value": 30.257}, {"type": "ndcg_at_1000", "value": 36.001}, {"type": "ndcg_at_3", "value": 20.43}, {"type": "ndcg_at_5", "value": 17.622}, {"type": "precision_at_1", "value": 22.8}, {"type": "precision_at_10", "value": 11.26}, {"type": "precision_at_100", "value": 2.405}, {"type": "precision_at_1000", "value": 0.377}, {"type": "precision_at_3", "value": 19.633}, {"type": "precision_at_5", "value": 15.68}, {"type": "recall_at_1", "value": 4.618}, {"type": "recall_at_10", "value": 22.811999999999998}, {"type": "recall_at_100", "value": 48.787000000000006}, {"type": "recall_at_1000", "value": 76.63799999999999}, {"type": "recall_at_3", "value": 11.952}, {"type": "recall_at_5", "value": 15.892000000000001}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.01529458252244}, {"type": "cos_sim_spearman", "value": 77.92985224770254}, {"type": "euclidean_pearson", "value": 81.04251429422487}, {"type": "euclidean_spearman", "value": 77.92838490549133}, {"type": "manhattan_pearson", "value": 80.95892251458979}, {"type": "manhattan_spearman", "value": 77.81028089705941}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.97885282534388}, {"type": "cos_sim_spearman", "value": 75.1221970851712}, {"type": "euclidean_pearson", "value": 80.34455956720097}, {"type": "euclidean_spearman", "value": 74.5894274239938}, {"type": "manhattan_pearson", "value": 80.38999766325465}, {"type": "manhattan_spearman", "value": 74.68524557166975}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", 
"type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.95746064915672}, {"type": "cos_sim_spearman", "value": 85.08683458043946}, {"type": "euclidean_pearson", "value": 84.56699492836385}, {"type": "euclidean_spearman", "value": 85.66089116133713}, {"type": "manhattan_pearson", "value": 84.47553323458541}, {"type": "manhattan_spearman", "value": 85.56142206781472}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.71377893595067}, {"type": "cos_sim_spearman", "value": 81.03453291428589}, {"type": "euclidean_pearson", "value": 82.57136298308613}, {"type": "euclidean_spearman", "value": 81.15839961890875}, {"type": "manhattan_pearson", "value": 82.55157879373837}, {"type": "manhattan_spearman", "value": 81.1540163767054}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.64197832372373}, {"type": "cos_sim_spearman", "value": 88.31966852492485}, {"type": "euclidean_pearson", "value": 87.98692129976983}, {"type": "euclidean_spearman", "value": 88.6247340837856}, {"type": "manhattan_pearson", "value": 87.90437827826412}, {"type": "manhattan_spearman", "value": 88.56278787131457}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.84159950146693}, {"type": "cos_sim_spearman", "value": 83.90678384140168}, {"type": "euclidean_pearson", "value": 83.19005018860221}, {"type": "euclidean_spearman", "value": 84.16260415876295}, {"type": "manhattan_pearson", "value": 83.05030612994494}, {"type": "manhattan_spearman", "value": 83.99605629718336}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.49935350176666}, {"type": "cos_sim_spearman", "value": 87.59086606735383}, {"type": "euclidean_pearson", "value": 88.06537181129983}, {"type": "euclidean_spearman", "value": 87.6687448086014}, {"type": "manhattan_pearson", "value": 87.96599131972935}, {"type": "manhattan_spearman", "value": 87.63295748969642}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 67.68232799482763}, {"type": "cos_sim_spearman", "value": 67.99930378085793}, {"type": "euclidean_pearson", "value": 68.50275360001696}, {"type": "euclidean_spearman", "value": 67.81588179309259}, {"type": "manhattan_pearson", "value": 68.5892154749763}, {"type": "manhattan_spearman", "value": 67.84357259640682}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.37049618406554}, {"type": "cos_sim_spearman", 
"value": 85.57014313159492}, {"type": "euclidean_pearson", "value": 85.57469513908282}, {"type": "euclidean_spearman", "value": 85.661948135258}, {"type": "manhattan_pearson", "value": 85.36866831229028}, {"type": "manhattan_spearman", "value": 85.5043455368843}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 84.83259065376154}, {"type": "mrr", "value": 95.58455433455433}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 58.817}, {"type": "map_at_10", "value": 68.459}, {"type": "map_at_100", "value": 68.951}, {"type": "map_at_1000", "value": 68.979}, {"type": "map_at_3", "value": 65.791}, {"type": "map_at_5", "value": 67.583}, {"type": "mrr_at_1", "value": 61.667}, {"type": "mrr_at_10", "value": 69.368}, {"type": "mrr_at_100", "value": 69.721}, {"type": "mrr_at_1000", "value": 69.744}, {"type": "mrr_at_3", "value": 67.278}, {"type": "mrr_at_5", "value": 68.611}, {"type": "ndcg_at_1", "value": 61.667}, {"type": "ndcg_at_10", "value": 72.70100000000001}, {"type": "ndcg_at_100", "value": 74.928}, {"type": "ndcg_at_1000", "value": 75.553}, {"type": "ndcg_at_3", "value": 68.203}, {"type": "ndcg_at_5", "value": 70.804}, {"type": "precision_at_1", "value": 61.667}, {"type": "precision_at_10", "value": 9.533}, {"type": "precision_at_100", "value": 1.077}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 26.444000000000003}, {"type": "precision_at_5", "value": 17.599999999999998}, {"type": "recall_at_1", "value": 58.817}, {"type": "recall_at_10", "value": 84.789}, {"type": "recall_at_100", "value": 95.0}, {"type": "recall_at_1000", "value": 99.667}, {"type": "recall_at_3", "value": 72.8}, {"type": "recall_at_5", "value": 79.294}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.8108910891089}, {"type": "cos_sim_ap", "value": 95.5743678558349}, {"type": "cos_sim_f1", "value": 90.43133366385722}, {"type": "cos_sim_precision", "value": 89.67551622418878}, {"type": "cos_sim_recall", "value": 91.2}, {"type": "dot_accuracy", "value": 99.75841584158415}, {"type": "dot_ap", "value": 94.00786363627253}, {"type": "dot_f1", "value": 87.51910341314316}, {"type": "dot_precision", "value": 89.20041536863967}, {"type": "dot_recall", "value": 85.9}, {"type": "euclidean_accuracy", "value": 99.81485148514851}, {"type": "euclidean_ap", "value": 95.4752113136905}, {"type": "euclidean_f1", "value": 90.44334975369456}, {"type": "euclidean_precision", "value": 89.126213592233}, {"type": "euclidean_recall", "value": 91.8}, {"type": "manhattan_accuracy", "value": 99.81584158415842}, {"type": "manhattan_ap", "value": 95.5163172682464}, {"type": "manhattan_f1", "value": 90.51987767584097}, {"type": "manhattan_precision", "value": 92.3076923076923}, {"type": "manhattan_recall", "value": 88.8}, {"type": "max_accuracy", "value": 99.81584158415842}, {"type": "max_ap", "value": 95.5743678558349}, {"type": "max_f1", "value": 90.51987767584097}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB 
StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 62.63235986949449}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 36.334795589585575}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 52.02955214518782}, {"type": "mrr", "value": 52.8004838298956}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.63769566275453}, {"type": "cos_sim_spearman", "value": 30.422379185989335}, {"type": "dot_pearson", "value": 26.88493071882256}, {"type": "dot_spearman", "value": 26.505249740971305}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.21}, {"type": "map_at_10", "value": 1.654}, {"type": "map_at_100", "value": 10.095}, {"type": "map_at_1000", "value": 25.808999999999997}, {"type": "map_at_3", "value": 0.594}, {"type": "map_at_5", "value": 0.9289999999999999}, {"type": "mrr_at_1", "value": 78.0}, {"type": "mrr_at_10", "value": 87.019}, {"type": "mrr_at_100", "value": 87.019}, {"type": "mrr_at_1000", "value": 87.019}, {"type": "mrr_at_3", "value": 86.333}, {"type": "mrr_at_5", "value": 86.733}, {"type": "ndcg_at_1", "value": 73.0}, {"type": "ndcg_at_10", "value": 66.52900000000001}, {"type": "ndcg_at_100", "value": 53.433}, {"type": "ndcg_at_1000", "value": 51.324000000000005}, {"type": "ndcg_at_3", "value": 72.02199999999999}, {"type": "ndcg_at_5", "value": 69.696}, {"type": "precision_at_1", "value": 78.0}, {"type": "precision_at_10", "value": 70.39999999999999}, {"type": "precision_at_100", "value": 55.46}, {"type": "precision_at_1000", "value": 22.758}, {"type": "precision_at_3", "value": 76.667}, {"type": "precision_at_5", "value": 74.0}, {"type": "recall_at_1", "value": 0.21}, {"type": "recall_at_10", "value": 1.8849999999999998}, {"type": "recall_at_100", "value": 13.801}, {"type": "recall_at_1000", "value": 49.649}, {"type": "recall_at_3", "value": 0.632}, {"type": "recall_at_5", "value": 1.009}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 1.797}, {"type": "map_at_10", "value": 9.01}, {"type": "map_at_100", "value": 14.682}, {"type": "map_at_1000", "value": 16.336000000000002}, {"type": "map_at_3", "value": 4.546}, {"type": "map_at_5", "value": 5.9270000000000005}, {"type": "mrr_at_1", "value": 24.490000000000002}, {"type": "mrr_at_10", "value": 41.156}, {"type": "mrr_at_100", "value": 42.392}, {"type": "mrr_at_1000", "value": 42.408}, {"type": "mrr_at_3", "value": 38.775999999999996}, {"type": "mrr_at_5", "value": 40.102}, {"type": "ndcg_at_1", "value": 21.429000000000002}, {"type": "ndcg_at_10", "value": 
22.222}, {"type": "ndcg_at_100", "value": 34.405}, {"type": "ndcg_at_1000", "value": 46.599000000000004}, {"type": "ndcg_at_3", "value": 25.261}, {"type": "ndcg_at_5", "value": 22.695999999999998}, {"type": "precision_at_1", "value": 24.490000000000002}, {"type": "precision_at_10", "value": 19.796}, {"type": "precision_at_100", "value": 7.306}, {"type": "precision_at_1000", "value": 1.5350000000000001}, {"type": "precision_at_3", "value": 27.211000000000002}, {"type": "precision_at_5", "value": 22.857}, {"type": "recall_at_1", "value": 1.797}, {"type": "recall_at_10", "value": 15.706000000000001}, {"type": "recall_at_100", "value": 46.412}, {"type": "recall_at_1000", "value": 83.159}, {"type": "recall_at_3", "value": 6.1370000000000005}, {"type": "recall_at_5", "value": 8.599}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 70.3302}, {"type": "ap", "value": 14.169121204575601}, {"type": "f1", "value": 54.229345975274235}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 58.22297679683077}, {"type": "f1", "value": 58.62984908377875}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 49.952922428464255}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 84.68140907194373}, {"type": "cos_sim_ap", "value": 70.12180123666836}, {"type": "cos_sim_f1", "value": 65.77501791258658}, {"type": "cos_sim_precision", "value": 60.07853403141361}, {"type": "cos_sim_recall", "value": 72.66490765171504}, {"type": "dot_accuracy", "value": 81.92167848840674}, {"type": "dot_ap", "value": 60.49837581423469}, {"type": "dot_f1", "value": 58.44186046511628}, {"type": "dot_precision", "value": 52.24532224532224}, {"type": "dot_recall", "value": 66.3060686015831}, {"type": "euclidean_accuracy", "value": 84.73505394289802}, {"type": "euclidean_ap", "value": 70.3278904593286}, {"type": "euclidean_f1", "value": 65.98851124940161}, {"type": "euclidean_precision", "value": 60.38107752956636}, {"type": "euclidean_recall", "value": 72.74406332453826}, {"type": "manhattan_accuracy", "value": 84.73505394289802}, {"type": "manhattan_ap", "value": 70.00737738537337}, {"type": "manhattan_f1", "value": 65.80150784822642}, {"type": "manhattan_precision", "value": 61.892583120204606}, {"type": "manhattan_recall", "value": 70.23746701846966}, {"type": "max_accuracy", "value": 84.73505394289802}, {"type": "max_ap", "value": 70.3278904593286}, {"type": "max_f1", "value": 65.98851124940161}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, 
"metrics": [{"type": "cos_sim_accuracy", "value": 88.44258159661582}, {"type": "cos_sim_ap", "value": 84.91926704880888}, {"type": "cos_sim_f1", "value": 77.07651086632926}, {"type": "cos_sim_precision", "value": 74.5894554883319}, {"type": "cos_sim_recall", "value": 79.73514012935017}, {"type": "dot_accuracy", "value": 85.88116583226608}, {"type": "dot_ap", "value": 78.9753854779923}, {"type": "dot_f1", "value": 72.17757637979255}, {"type": "dot_precision", "value": 66.80647486729143}, {"type": "dot_recall", "value": 78.48783492454572}, {"type": "euclidean_accuracy", "value": 88.5299025885823}, {"type": "euclidean_ap", "value": 85.08006075642194}, {"type": "euclidean_f1", "value": 77.29637336504163}, {"type": "euclidean_precision", "value": 74.69836253950014}, {"type": "euclidean_recall", "value": 80.08161379735141}, {"type": "manhattan_accuracy", "value": 88.55124771995187}, {"type": "manhattan_ap", "value": 85.00941529932851}, {"type": "manhattan_f1", "value": 77.33100233100232}, {"type": "manhattan_precision", "value": 73.37572573956317}, {"type": "manhattan_recall", "value": 81.73698798891284}, {"type": "max_accuracy", "value": 88.55124771995187}, {"type": "max_ap", "value": 85.08006075642194}, {"type": "max_f1", "value": 77.33100233100232}]}]}]}
dataset
null
539
ntc-ai/SDXL-LoRA-slider.dark-elfdrow
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2023-12-21T04:39:03Z
2023-12-21T04:39:07+00:00
3
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/dark elf,drow...light elf/dark elf,drow_17_3.0.png widget: - text: dark elf,drow output: url: images/dark elf,drow_17_3.0.png - text: dark elf,drow output: url: images/dark elf,drow_19_3.0.png - text: dark elf,drow output: url: images/dark elf,drow_20_3.0.png - text: dark elf,drow output: url: images/dark elf,drow_21_3.0.png - text: dark elf,drow output: url: images/dark elf,drow_22_3.0.png inference: false instance_prompt: dark elf,drow --- # ntcai.xyz slider - dark elf,drow (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/dark elf,drow_17_-3.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_17_0.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_17_3.0.png" width=256 height=256 /> | | <img src="images/dark elf,drow_19_-3.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_19_0.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_19_3.0.png" width=256 height=256 /> | | <img src="images/dark elf,drow_20_-3.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_20_0.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` dark elf,drow ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.dark-elfdrow', weight_name='dark elf,drow.safetensors', adapter_name="dark elf,drow") # Activate the LoRA pipe.set_adapters(["dark elf,drow"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, dark elf,drow" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 520+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
Non_BioNLP
# ntcai.xyz slider - dark elf,drow (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/dark elf,drow_17_-3.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_17_0.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_17_3.0.png" width=256 height=256 /> | | <img src="images/dark elf,drow_19_-3.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_19_0.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_19_3.0.png" width=256 height=256 /> | | <img src="images/dark elf,drow_20_-3.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_20_0.0.png" width=256 height=256 /> | <img src="images/dark elf,drow_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` dark elf,drow ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.dark-elfdrow', weight_name='dark elf,drow.safetensors', adapter_name="dark elf,drow") # Activate the LoRA pipe.set_adapters(["dark elf,drow"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, dark elf,drow" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 520+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
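The card's own snippet applies the slider at a single weight (2.0). As a small, hedged extension of that same snippet, the sketch below sweeps the adapter weight across -3, 0, and +3 to reproduce the strength comparison shown in the image grid above; the output filename pattern is illustrative and not taken from the card.

```python
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Same base checkpoint and scheduler setup as the card's example
pipe = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors"
)
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Load the slider LoRA exactly as the card does
pipe.load_lora_weights(
    "ntc-ai/SDXL-LoRA-slider.dark-elfdrow",
    weight_name="dark elf,drow.safetensors",
    adapter_name="dark elf,drow",
)

prompt = "medieval rich kingpin sitting in a tavern, dark elf,drow"

# Sweep the slider strength to mirror the -3 / 0 / +3 columns in the table above
for strength in (-3.0, 0.0, 3.0):
    pipe.set_adapters(["dark elf,drow"], adapter_weights=[strength])
    image = pipe(
        prompt,
        negative_prompt="nsfw",
        width=512,
        height=512,
        guidance_scale=2,
        num_inference_steps=10,
    ).images[0]
    image.save(f"dark_elf_drow_strength_{strength}.png")  # illustrative filename
```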
{"base_model": "stabilityai/stable-diffusion-xl-base-1.0", "language": ["en"], "license": "mit", "tags": ["text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "diffusers"], "thumbnail": "images/evaluate/dark elf,drow...light elf/dark elf,drow_17_3.0.png", "widget": [{"text": "dark elf,drow", "output": {"url": "images/dark elf,drow_17_3.0.png"}}, {"text": "dark elf,drow", "output": {"url": "images/dark elf,drow_19_3.0.png"}}, {"text": "dark elf,drow", "output": {"url": "images/dark elf,drow_20_3.0.png"}}, {"text": "dark elf,drow", "output": {"url": "images/dark elf,drow_21_3.0.png"}}, {"text": "dark elf,drow", "output": {"url": "images/dark elf,drow_22_3.0.png"}}], "inference": false, "instance_prompt": "dark elf,drow"}
dataset
null
540
TensorStack/MidgardPony-XL-onnx
TensorStack
text-to-image
[ "onnx", "text-to-image", "region:us" ]
2024-06-17T02:28:28Z
2024-06-17T02:38:02+00:00
0
1
--- pipeline_tag: text-to-image --- # Midgard Pony v3 - Onnx Olive DirectML Optimized ## Original Model https://civitai.com/models/470287?modelVersionId=561310 ## C# Inference Demo https://github.com/TensorStack-AI/OnnxStack ```csharp // Create Pipeline var pipeline = StableDiffusionXLPipeline.CreatePipeline("D:\\Models\\MidgardPony-XL"); // Prompt var promptOptions = new PromptOptions { Prompt = "Craft an image of a gallant furry prince, with a charming smile and a sword at his side, ready to embark on a quest" }; // Run pipeline var result = await pipeline.GenerateImageAsync(promptOptions); // Save Image Result await result.SaveAsync("Result.png"); ``` ## Inference Result ![Intro Image](Sample.png)
[ "CRAFT" ]
Non_BioNLP
# Midgard Pony v3 - Onnx Olive DirectML Optimized ## Original Model https://civitai.com/models/470287?modelVersionId=561310 ## C# Inference Demo https://github.com/TensorStack-AI/OnnxStack ```csharp // Create Pipeline var pipeline = StableDiffusionXLPipeline.CreatePipeline("D:\\Models\\MidgardPony-XL"); // Prompt var promptOptions = new PromptOptions { Prompt = "Craft an image of a gallant furry prince, with a charming smile and a sword at his side, ready to embark on a quest" }; // Run pipeline var result = await pipeline.GenerateImageAsync(promptOptions); // Save Image Result await result.SaveAsync("Result.png"); ``` ## Inference Result ![Intro Image](Sample.png)
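The official demo above is the C# OnnxStack snippet. For completeness, here is a hedged Python-side sketch using Optimum's ONNX Runtime pipeline; it assumes the repository follows the standard Optimum/Olive ONNX SDXL folder layout (text_encoder/, unet/, vae_decoder/, ...) and that onnxruntime-directml is installed for the DirectML provider. If the export only targets OnnxStack, this loader may not apply.

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# Assumption: the repo is loadable as a standard ONNX SDXL pipeline.
# "DmlExecutionProvider" requires onnxruntime-directml; fall back to
# "CPUExecutionProvider" if DirectML is not available on your machine.
pipe = ORTStableDiffusionXLPipeline.from_pretrained(
    "TensorStack/MidgardPony-XL-onnx",
    provider="DmlExecutionProvider",
)

prompt = (
    "Craft an image of a gallant furry prince, with a charming smile "
    "and a sword at his side, ready to embark on a quest"
)
image = pipe(prompt).images[0]
image.save("Result.png")
```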
{"pipeline_tag": "text-to-image"}
dataset
null
541
ntc-ai/SDXL-LoRA-slider.on-the-phone
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2023-12-18T10:34:09Z
2024-02-06T00:35:05+00:00
19
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/on the phone_17_3.0.png widget: - text: on the phone output: url: images/on the phone_17_3.0.png - text: on the phone output: url: images/on the phone_19_3.0.png - text: on the phone output: url: images/on the phone_20_3.0.png - text: on the phone output: url: images/on the phone_21_3.0.png - text: on the phone output: url: images/on the phone_22_3.0.png inference: false instance_prompt: on the phone --- # ntcai.xyz slider - on the phone (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/on the phone_17_-3.0.png" width=256 height=256 /> | <img src="images/on the phone_17_0.0.png" width=256 height=256 /> | <img src="images/on the phone_17_3.0.png" width=256 height=256 /> | | <img src="images/on the phone_19_-3.0.png" width=256 height=256 /> | <img src="images/on the phone_19_0.0.png" width=256 height=256 /> | <img src="images/on the phone_19_3.0.png" width=256 height=256 /> | | <img src="images/on the phone_20_-3.0.png" width=256 height=256 /> | <img src="images/on the phone_20_0.0.png" width=256 height=256 /> | <img src="images/on the phone_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/0b7d7e51-46f9-487e-9b8b-e2f4b91a2d1a](https://sliders.ntcai.xyz/sliders/app/loras/0b7d7e51-46f9-487e-9b8b-e2f4b91a2d1a) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` on the phone ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.on-the-phone', weight_name='on the phone.safetensors', adapter_name="on the phone") # Activate the LoRA pipe.set_adapters(["on the phone"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, on the phone" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
Non_BioNLP
# ntcai.xyz slider - on the phone (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/on the phone_17_-3.0.png" width=256 height=256 /> | <img src="images/on the phone_17_0.0.png" width=256 height=256 /> | <img src="images/on the phone_17_3.0.png" width=256 height=256 /> | | <img src="images/on the phone_19_-3.0.png" width=256 height=256 /> | <img src="images/on the phone_19_0.0.png" width=256 height=256 /> | <img src="images/on the phone_19_3.0.png" width=256 height=256 /> | | <img src="images/on the phone_20_-3.0.png" width=256 height=256 /> | <img src="images/on the phone_20_0.0.png" width=256 height=256 /> | <img src="images/on the phone_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/0b7d7e51-46f9-487e-9b8b-e2f4b91a2d1a](https://sliders.ntcai.xyz/sliders/app/loras/0b7d7e51-46f9-487e-9b8b-e2f4b91a2d1a) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` on the phone ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.on-the-phone', weight_name='on the phone.safetensors', adapter_name="on the phone") # Activate the LoRA pipe.set_adapters(["on the phone"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, on the phone" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
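The card's snippet activates the slider with set_adapters at a fixed weight. An alternative, sketched below under the assumption that a recent diffusers release with the PEFT backend is installed, is to fuse the LoRA into the base weights at the desired strength so no adapter bookkeeping is needed at inference time; fuse_lora/unfuse_lora are generic diffusers features, not something documented in the card.

```python
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors"
)
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(
    "ntc-ai/SDXL-LoRA-slider.on-the-phone",
    weight_name="on the phone.safetensors",
    adapter_name="on the phone",
)

# Bake the slider into the base weights at a fixed strength (here 2.0,
# matching the card's adapter_weights example).
pipe.fuse_lora(lora_scale=2.0)

image = pipe(
    "medieval rich kingpin sitting in a tavern, on the phone",
    negative_prompt="nsfw",
    width=512,
    height=512,
    guidance_scale=2,
    num_inference_steps=10,
).images[0]
image.save("result.png")

# pipe.unfuse_lora()  # undo the merge if you want to change the strength later
```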
{"base_model": "stabilityai/stable-diffusion-xl-base-1.0", "language": ["en"], "license": "mit", "tags": ["text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "diffusers"], "thumbnail": "images/on the phone_17_3.0.png", "widget": [{"text": "on the phone", "output": {"url": "images/on the phone_17_3.0.png"}}, {"text": "on the phone", "output": {"url": "images/on the phone_19_3.0.png"}}, {"text": "on the phone", "output": {"url": "images/on the phone_20_3.0.png"}}, {"text": "on the phone", "output": {"url": "images/on the phone_21_3.0.png"}}, {"text": "on the phone", "output": {"url": "images/on the phone_22_3.0.png"}}], "inference": false, "instance_prompt": "on the phone"}
dataset
null
542
ostapeno/ng_1B_lora_sim_replace_distinct10
ostapeno
null
[ "region:us" ]
2023-12-25T02:24:26Z
2023-12-25T02:26:16+00:00
0
0
--- {} --- Number of experts present in the library: 20 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | aeslc_1_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/aeslc_1_0_0 | lora | | social_i_qa_Generate_the_question_from_the_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora | | ropes_background_new_situation_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora | | wiqa_what_is_the_final_step_of_the_following_process | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora | | ropes_background_situation_middle | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_background_situation_middle | lora | | ropes_prompt_beginning | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_prompt_beginning | lora | | wiki_hop_original_generate_subject | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora | | niv2_explanation | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/niv2_explanation | lora | | sciq_Multiple_Choice | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/sciq_Multiple_Choice | lora | | niv2_dialogue_act_recognition | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora | | wiki_hop_original_generate_object | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora | | social_i_qa_Check_if_a_random_answer_is_valid_or_not | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora | | ultrachat_25 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ultrachat_25 | lora | | ropes_new_situation_background_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora | | quarel_heres_a_story | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/quarel_heres_a_story | lora | | super_glue_cb_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora | | duorc_SelfRC_generate_question_by_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora | | ropes_read_background_situation | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_read_background_situation | lora | | high_school_psychology | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/high_school_psychology | lora | | ropes_plain_bottom_hint | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora | Last updated on: 2023-12-25 02:26:15+00:00
[ "SCIQ" ]
Non_BioNLP
Number of experts present in the library: 20 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | aeslc_1_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/aeslc_1_0_0 | lora | | social_i_qa_Generate_the_question_from_the_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora | | ropes_background_new_situation_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora | | wiqa_what_is_the_final_step_of_the_following_process | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora | | ropes_background_situation_middle | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_background_situation_middle | lora | | ropes_prompt_beginning | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_prompt_beginning | lora | | wiki_hop_original_generate_subject | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora | | niv2_explanation | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/niv2_explanation | lora | | sciq_Multiple_Choice | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/sciq_Multiple_Choice | lora | | niv2_dialogue_act_recognition | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora | | wiki_hop_original_generate_object | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora | | social_i_qa_Check_if_a_random_answer_is_valid_or_not | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora | | ultrachat_25 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ultrachat_25 | lora | | ropes_new_situation_background_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora | | quarel_heres_a_story | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/quarel_heres_a_story | lora | | super_glue_cb_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora | | duorc_SelfRC_generate_question_by_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora | | ropes_read_background_situation | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_read_background_situation | lora | | high_school_psychology | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/high_school_psychology | lora | | ropes_plain_bottom_hint | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora | Last updated on: 2023-12-25 02:26:15+00:00
{}
dataset
null
543
Locutusque/gpt2-xl-conversational
Locutusque
text-generation
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "en", "dataset:Locutusque/InstructMix", "doi:10.57967/hf/1371", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-08-21T04:43:31Z
2023-11-21T17:17:46+00:00
2,140
18
--- datasets: - Locutusque/InstructMix language: - en license: mit metrics: - bleu - perplexity - loss - accuracy pipeline_tag: text-generation widget: - text: '<|USER|> Design a Neo4j database and Cypher function snippet to Display Extreme Dental hygiene: Using Mouthwash for Analysis for Beginners. Implement if/else or switch/case statements to handle different conditions related to the Consent. Provide detailed comments explaining your control flow and the reasoning behind each decision. <|ASSISTANT|> ' - text: '<|USER|> Write me a story about a magical place. <|ASSISTANT|> ' - text: '<|USER|> Write me an essay about the life of George Washington <|ASSISTANT|> ' - text: '<|USER|> Solve the following equation 2x + 10 = 20 <|ASSISTANT|> ' - text: '<|USER|> Craft me a list of some nice places to visit around the world. <|ASSISTANT|> ' - text: '<|USER|> How to manage a lazy employee: Address the employee verbally. Don''t allow an employee''s laziness or lack of enthusiasm to become a recurring issue. Tell the employee you''re hoping to speak with them about workplace expectations and performance, and schedule a time to sit down together. Question: To manage a lazy employee, it is suggested to talk to the employee. True, False, or Neither? <|ASSISTANT|> ' inference: parameters: temperature: 0.8 do_sample: true top_p: 0.14 top_k: 41 max_new_tokens: 250 repetition_penalty: 1.176 --- # Model Card ## Model Details - Model Name: gpt2-xl-conversational - Model Type: Language Modeling - Task: Generating Conversational Responses - Hardware: 1x Nvidia Titan V - Description: This model is trained on a dataset of conversations between a user and an AI assistant, with the goal of generating a coherent and relevant response to the user's input. It uses the GPT-2 architecture, a state-of-the-art transformer-based language model that is capable of generating high-quality text with a wide range of styles and tones. The model is fine-tuned on the conversational data using maximum likelihood estimation, and is evaluated based on its ability to generate responses that are both grammatically correct and semantically relevant to the user's input. ## Intended Use This model is intended to be used for generating conversational responses in a variety of contexts, such as chatbots, virtual assistants, and customer service applications. It is designed to provide natural and engaging responses to user input, with a focus on maintaining a consistent tone and style throughout the conversation. The model is suitable for use in both text-based and voice-based interfaces, and can be easily integrated into existing applications using the PyTorch and Transformers frameworks. ## Training Data The model is trained on a large dataset of conversational data, consisting of interactions between users and an AI assistant. The data is preprocessed to remove any sensitive information and is formatted in a way that is suitable for training a language model. The training data is split into a training set and a validation set, with the training set used to update the model parameters and the validation set used to evaluate the model performance. The model was trained on 300,000 examples and achieved excellent metrics. ## Model Architecture The model architecture used in this model is GPT-2, a transformer-based language model that is capable of generating high-quality text with a wide range of styles and tones. 
The GPT-2 architecture consists of a multi-layered decoder-only transformer, with self-attention mechanisms that allow the model to capture long-term dependencies and generate coherent text. ## Evaluation Metrics The model is evaluated based on several metrics, including loss, reward, penalty, BLEU score, and perplexity. The loss metric is calculated during training and reflects the difference between the predicted output and the actual output. The reward metric is based on the number of correct words generated by the model, while the penalty metric penalizes the model for repeating words consecutively. The BLEU score measures the similarity between the generated text and the ground truth text, while the perplexity metric measures how well the model is able to predict the next word in a sequence. During training, the model achieved the following metrics: - BLEU score: 52 - Accuracy: 53 - perplexity: 4.3 Evaluation metrics: | Task |Version|Metric|Value| |Stderr| |--------|------:|------|----:|---|-----:| |pubmedqa| 0|acc |0.536|± |0.0223 |arc_challenge| 0|acc_norm |0.2867|± |0.0132| |arc_easy | 0|acc |0.5804|± |0.0101| |arc_easy | 0|acc_norm|0.5707|±|0.0102| |winogrande| 0|acc |0.5691|± |0.0139| |truthfulqa_mc| 1|mc2 |0.3918|± |0.0144| |anli_r1| 0|acc |0.338|± |0.0150| |anli_r2| 0|acc |0.346|± |0.0151| |anli_r3| 0|acc |0.355|± |0.0138| |drop| 1|f1 |0.0034|± |0.0004| |hendrycksTest-abstract_algebra | 1|acc | 0.32|± |0.0952| |hendrycksTest-anatomy | 1|acc | 0.44|± |0.1013| |hendrycksTest-astronomy | 1|acc | 0.24|± |0.0872| |hendrycksTest-business_ethics | 1|acc | 0.24|± |0.0872| |hendrycksTest-clinical_knowledge | 1|acc | 0.24|± |0.0872| |hendrycksTest-college_biology | 1|acc | 0.20|± |0.0816| |hendrycksTest-college_chemistry | 1|acc | 0.40|± |0.1000| |hendrycksTest-college_computer_science | 1|acc | 0.36|± |0.0980| |hendrycksTest-college_mathematics | 1|acc | 0.48|± |0.1020| |hendrycksTest-college_medicine | 1|acc | 0.20|± |0.0816| |hendrycksTest-college_physics | 1|acc | 0.44|± |0.1013| |hendrycksTest-computer_security | 1|acc | 0.16|± |0.0748| |hendrycksTest-conceptual_physics | 1|acc | 0.12|± |0.0663| |hendrycksTest-econometrics | 1|acc | 0.16|± |0.0748| |hendrycksTest-electrical_engineering | 1|acc | 0.28|± |0.0917| |hendrycksTest-elementary_mathematics | 1|acc | 0.36|± |0.0980| |hendrycksTest-formal_logic | 1|acc | 0.44|± |0.1013| |hendrycksTest-global_facts | 1|acc | 0.20|± |0.0816| |hendrycksTest-high_school_biology | 1|acc | 0.20|± |0.0816| |hendrycksTest-high_school_chemistry | 1|acc | 0.28|± |0.0917| |hendrycksTest-high_school_computer_science | 1|acc | 0.24|± |0.0872| |hendrycksTest-high_school_european_history | 1|acc | 0.32|± |0.0952| |hendrycksTest-high_school_geography | 1|acc | 0.32|± |0.0952| |hendrycksTest-high_school_government_and_politics| 1|acc | 0.28|± |0.0917| |hendrycksTest-high_school_macroeconomics | 1|acc | 0.28|± |0.0917| |hendrycksTest-high_school_mathematics | 1|acc | 0.20|± |0.0816| |hendrycksTest-high_school_microeconomics | 1|acc | 0.24|± |0.0872| |hendrycksTest-high_school_physics | 1|acc | 0.28|± |0.0917| |hendrycksTest-high_school_psychology | 1|acc | 0.32|± |0.0952| |hendrycksTest-high_school_statistics | 1|acc | 0.40|± |0.1000| |hendrycksTest-high_school_us_history | 1|acc | 0.32|± |0.0952| |hendrycksTest-high_school_world_history | 1|acc | 0.36|± |0.0980|| |hendrycksTest-human_aging | 1|acc | 0.16|± |0.0748| |hendrycksTest-human_sexuality | 1|acc | 0.40|± |0.1000| |hendrycksTest-international_law | 1|acc | 0.24|± |0.0872| |hendrycksTest-jurisprudence | 
1|acc | 0.08|± |0.0554| |hendrycksTest-logical_fallacies | 1|acc | 0.52|± |0.1020| |hendrycksTest-machine_learning | 1|acc | 0.12|± |0.0663| |hendrycksTest-management | 1|acc | 0.12|± |0.0663| |hendrycksTest-marketing | 1|acc | 0.16|± |0.0748| |hendrycksTest-medical_genetics | 1|acc | 0.12|± |0.0663| |hendrycksTest-miscellaneous | 1|acc | 0.36|± |0.0980| |hendrycksTest-moral_disputes | 1|acc | 0.08|± |0.0554| |hendrycksTest-moral_scenarios | 1|acc | 0.44|± |0.1013| |hendrycksTest-nutrition | 1|acc | 0.32|± |0.0952| |hendrycksTest-philosophy | 1|acc | 0.44|± |0.1013| |hendrycksTest-prehistory | 1|acc | 0.16|± |0.0748| |hendrycksTest-professional_accounting | 1|acc | 0.28|± |0.0917| |hendrycksTest-professional_law | 1|acc | 0.12|± |0.0663| |hendrycksTest-professional_medicine | 1|acc | 0.40|± |0.1000| |hendrycksTest-professional_psychology | 1|acc | 0.24|± |0.0872| |hendrycksTest-public_relations | 1|acc | 0.08|± |0.0554| |hendrycksTest-security_studies | 1|acc | 0.24|± |0.0872| |hendrycksTest-sociology | 1|acc | 0.28|± |0.0917| |hendrycksTest-us_foreign_policy | 1|acc | 0.24|± |0.0872| |hendrycksTest-virology | 1|acc | 0.20|± |0.0816| |hendrycksTest-world_religions | 1|acc | 0.16|± |0.0748| ## Limitations and Bias This model is not suitable for all use cases due to its limited training time on a weak computer. As a result, it may produce irrelevant or nonsensical responses. For optimal performance, I recommend using a GPU with at least 16 GB of VRAM and downloading the model manually instead of using the Transformers library. Here's how you should deploy the model: ```python import torch from transformers import GPT2LMHeadModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Locutusque/gpt2-xl-conversational") model = GPT2LMHeadModel.from_pretrained("Locutusque/gpt2-xl-conversational", torch_dtype=torch.float16) model.resize_token_embeddings(len(tokenizer)) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device, dtype=torch.float32) def generate_text(model: GPT2LMHeadModel, tokenizer, prompt, max_length=256): prompt = f'<|USER|> {prompt} <|ASSISTANT|> ' input_ids = tokenizer.encode(prompt, add_special_tokens=True, max_length=max_length, truncation=True, return_tensors="pt").to(device) output = model.generate(input_ids, do_sample=True, temperature=0.3, top_p=0.7, top_k=23, repetition_penalty=1.176, max_length=max_length, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id) output_ids = tokenizer.decode(output[0], skip_special_tokens=False) return output_ids # Loop to interact with the model while True: prompt = input("Enter a prompt (or 'q' to quit): ") if prompt == "q": break output_text = generate_text(model, tokenizer, prompt, max_length=1022) print(output_text) ``` ## Deploying and training the model The model has been fine-tuned on a specific input format that goes like this ```"<|USER|> {user prompt} <|ASSISTANT|> {model prediction} ".```
[ "CRAFT", "PUBMEDQA" ]
Non_BioNLP
# Model Card ## Model Details - Model Name: gpt2-xl-conversational - Model Type: Language Modeling - Task: Generating Conversational Responses - Hardware: 1x Nvidia Titan V - Description: This model is trained on a dataset of conversations between a user and an AI assistant, with the goal of generating a coherent and relevant response to the user's input. It uses the GPT-2 architecture, a state-of-the-art transformer-based language model that is capable of generating high-quality text with a wide range of styles and tones. The model is fine-tuned on the conversational data using maximum likelihood estimation, and is evaluated based on its ability to generate responses that are both grammatically correct and semantically relevant to the user's input. ## Intended Use This model is intended to be used for generating conversational responses in a variety of contexts, such as chatbots, virtual assistants, and customer service applications. It is designed to provide natural and engaging responses to user input, with a focus on maintaining a consistent tone and style throughout the conversation. The model is suitable for use in both text-based and voice-based interfaces, and can be easily integrated into existing applications using the PyTorch and Transformers frameworks. ## Training Data The model is trained on a large dataset of conversational data, consisting of interactions between users and an AI assistant. The data is preprocessed to remove any sensitive information and is formatted in a way that is suitable for training a language model. The training data is split into a training set and a validation set, with the training set used to update the model parameters and the validation set used to evaluate the model performance. The model was trained on 300,000 examples and achieved excellent metrics. ## Model Architecture The model architecture used in this model is GPT-2, a transformer-based language model that is capable of generating high-quality text with a wide range of styles and tones. The GPT-2 architecture consists of a multi-layered decoder-only transformer, with self-attention mechanisms that allow the model to capture long-term dependencies and generate coherent text. ## Evaluation Metrics The model is evaluated based on several metrics, including loss, reward, penalty, BLEU score, and perplexity. The loss metric is calculated during training and reflects the difference between the predicted output and the actual output. The reward metric is based on the number of correct words generated by the model, while the penalty metric penalizes the model for repeating words consecutively. The BLEU score measures the similarity between the generated text and the ground truth text, while the perplexity metric measures how well the model is able to predict the next word in a sequence. 
During training, the model achieved the following metrics: - BLEU score: 52 - Accuracy: 53 - perplexity: 4.3 Evaluation metrics: | Task |Version|Metric|Value| |Stderr| |--------|------:|------|----:|---|-----:| |pubmedqa| 0|acc |0.536|± |0.0223 |arc_challenge| 0|acc_norm |0.2867|± |0.0132| |arc_easy | 0|acc |0.5804|± |0.0101| |arc_easy | 0|acc_norm|0.5707|±|0.0102| |winogrande| 0|acc |0.5691|± |0.0139| |truthfulqa_mc| 1|mc2 |0.3918|± |0.0144| |anli_r1| 0|acc |0.338|± |0.0150| |anli_r2| 0|acc |0.346|± |0.0151| |anli_r3| 0|acc |0.355|± |0.0138| |drop| 1|f1 |0.0034|± |0.0004| |hendrycksTest-abstract_algebra | 1|acc | 0.32|± |0.0952| |hendrycksTest-anatomy | 1|acc | 0.44|± |0.1013| |hendrycksTest-astronomy | 1|acc | 0.24|± |0.0872| |hendrycksTest-business_ethics | 1|acc | 0.24|± |0.0872| |hendrycksTest-clinical_knowledge | 1|acc | 0.24|± |0.0872| |hendrycksTest-college_biology | 1|acc | 0.20|± |0.0816| |hendrycksTest-college_chemistry | 1|acc | 0.40|± |0.1000| |hendrycksTest-college_computer_science | 1|acc | 0.36|± |0.0980| |hendrycksTest-college_mathematics | 1|acc | 0.48|± |0.1020| |hendrycksTest-college_medicine | 1|acc | 0.20|± |0.0816| |hendrycksTest-college_physics | 1|acc | 0.44|± |0.1013| |hendrycksTest-computer_security | 1|acc | 0.16|± |0.0748| |hendrycksTest-conceptual_physics | 1|acc | 0.12|± |0.0663| |hendrycksTest-econometrics | 1|acc | 0.16|± |0.0748| |hendrycksTest-electrical_engineering | 1|acc | 0.28|± |0.0917| |hendrycksTest-elementary_mathematics | 1|acc | 0.36|± |0.0980| |hendrycksTest-formal_logic | 1|acc | 0.44|± |0.1013| |hendrycksTest-global_facts | 1|acc | 0.20|± |0.0816| |hendrycksTest-high_school_biology | 1|acc | 0.20|± |0.0816| |hendrycksTest-high_school_chemistry | 1|acc | 0.28|± |0.0917| |hendrycksTest-high_school_computer_science | 1|acc | 0.24|± |0.0872| |hendrycksTest-high_school_european_history | 1|acc | 0.32|± |0.0952| |hendrycksTest-high_school_geography | 1|acc | 0.32|± |0.0952| |hendrycksTest-high_school_government_and_politics| 1|acc | 0.28|± |0.0917| |hendrycksTest-high_school_macroeconomics | 1|acc | 0.28|± |0.0917| |hendrycksTest-high_school_mathematics | 1|acc | 0.20|± |0.0816| |hendrycksTest-high_school_microeconomics | 1|acc | 0.24|± |0.0872| |hendrycksTest-high_school_physics | 1|acc | 0.28|± |0.0917| |hendrycksTest-high_school_psychology | 1|acc | 0.32|± |0.0952| |hendrycksTest-high_school_statistics | 1|acc | 0.40|± |0.1000| |hendrycksTest-high_school_us_history | 1|acc | 0.32|± |0.0952| |hendrycksTest-high_school_world_history | 1|acc | 0.36|± |0.0980|| |hendrycksTest-human_aging | 1|acc | 0.16|± |0.0748| |hendrycksTest-human_sexuality | 1|acc | 0.40|± |0.1000| |hendrycksTest-international_law | 1|acc | 0.24|± |0.0872| |hendrycksTest-jurisprudence | 1|acc | 0.08|± |0.0554| |hendrycksTest-logical_fallacies | 1|acc | 0.52|± |0.1020| |hendrycksTest-machine_learning | 1|acc | 0.12|± |0.0663| |hendrycksTest-management | 1|acc | 0.12|± |0.0663| |hendrycksTest-marketing | 1|acc | 0.16|± |0.0748| |hendrycksTest-medical_genetics | 1|acc | 0.12|± |0.0663| |hendrycksTest-miscellaneous | 1|acc | 0.36|± |0.0980| |hendrycksTest-moral_disputes | 1|acc | 0.08|± |0.0554| |hendrycksTest-moral_scenarios | 1|acc | 0.44|± |0.1013| |hendrycksTest-nutrition | 1|acc | 0.32|± |0.0952| |hendrycksTest-philosophy | 1|acc | 0.44|± |0.1013| |hendrycksTest-prehistory | 1|acc | 0.16|± |0.0748| |hendrycksTest-professional_accounting | 1|acc | 0.28|± |0.0917| |hendrycksTest-professional_law | 1|acc | 0.12|± |0.0663| |hendrycksTest-professional_medicine | 1|acc | 0.40|± 
|0.1000| |hendrycksTest-professional_psychology | 1|acc | 0.24|± |0.0872| |hendrycksTest-public_relations | 1|acc | 0.08|± |0.0554| |hendrycksTest-security_studies | 1|acc | 0.24|± |0.0872| |hendrycksTest-sociology | 1|acc | 0.28|± |0.0917| |hendrycksTest-us_foreign_policy | 1|acc | 0.24|± |0.0872| |hendrycksTest-virology | 1|acc | 0.20|± |0.0816| |hendrycksTest-world_religions | 1|acc | 0.16|± |0.0748| ## Limitations and Bias This model is not suitable for all use cases due to its limited training time on a weak computer. As a result, it may produce irrelevant or nonsensical responses. For optimal performance, I recommend using a GPU with at least 16 GB of VRAM and downloading the model manually instead of using the Transformers library. Here's how you should deploy the model: ```python import torch from transformers import GPT2LMHeadModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Locutusque/gpt2-xl-conversational") model = GPT2LMHeadModel.from_pretrained("Locutusque/gpt2-xl-conversational", torch_dtype=torch.float16) model.resize_token_embeddings(len(tokenizer)) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device, dtype=torch.float32) def generate_text(model: GPT2LMHeadModel, tokenizer, prompt, max_length=256): prompt = f'<|USER|> {prompt} <|ASSISTANT|> ' input_ids = tokenizer.encode(prompt, add_special_tokens=True, max_length=max_length, truncation=True, return_tensors="pt").to(device) output = model.generate(input_ids, do_sample=True, temperature=0.3, top_p=0.7, top_k=23, repetition_penalty=1.176, max_length=max_length, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id) output_ids = tokenizer.decode(output[0], skip_special_tokens=False) return output_ids # Loop to interact with the model while True: prompt = input("Enter a prompt (or 'q' to quit): ") if prompt == "q": break output_text = generate_text(model, tokenizer, prompt, max_length=1022) print(output_text) ``` ## Deploying and training the model The model has been fine-tuned on a specific input format that goes like this ```"<|USER|> {user prompt} <|ASSISTANT|> {model prediction} ".```
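The "Deploying and training the model" section above states the fine-tuning format only as a template string. The short sketch below shows one way to render (prompt, response) pairs into that exact format; the sample response text and output file name are illustrative assumptions, not taken from the card.

```python
# Render (user prompt, model prediction) pairs into the template the card
# says the model was fine-tuned on:
# "<|USER|> {user prompt} <|ASSISTANT|> {model prediction} "
pairs = [
    # The prompt comes from the card's widget examples; the response is illustrative.
    ("Write me a story about a magical place.", "Once upon a time..."),
]

def to_training_text(user_prompt: str, model_prediction: str) -> str:
    return f"<|USER|> {user_prompt} <|ASSISTANT|> {model_prediction} "

# "train.txt" is a hypothetical output file name for this sketch.
with open("train.txt", "w", encoding="utf-8") as f:
    for user_prompt, model_prediction in pairs:
        f.write(to_training_text(user_prompt, model_prediction) + "\n")
```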
{"datasets": ["Locutusque/InstructMix"], "language": ["en"], "license": "mit", "metrics": ["bleu", "perplexity", "loss", "accuracy"], "pipeline_tag": "text-generation", "widget": [{"text": "<|USER|> Design a Neo4j database and Cypher function snippet to Display Extreme Dental hygiene: Using Mouthwash for Analysis for Beginners. Implement if/else or switch/case statements to handle different conditions related to the Consent. Provide detailed comments explaining your control flow and the reasoning behind each decision. <|ASSISTANT|> "}, {"text": "<|USER|> Write me a story about a magical place. <|ASSISTANT|> "}, {"text": "<|USER|> Write me an essay about the life of George Washington <|ASSISTANT|> "}, {"text": "<|USER|> Solve the following equation 2x + 10 = 20 <|ASSISTANT|> "}, {"text": "<|USER|> Craft me a list of some nice places to visit around the world. <|ASSISTANT|> "}, {"text": "<|USER|> How to manage a lazy employee: Address the employee verbally. Don't allow an employee's laziness or lack of enthusiasm to become a recurring issue. Tell the employee you're hoping to speak with them about workplace expectations and performance, and schedule a time to sit down together. Question: To manage a lazy employee, it is suggested to talk to the employee. True, False, or Neither? <|ASSISTANT|> "}], "inference": {"parameters": {"temperature": 0.8, "do_sample": true, "top_p": 0.14, "top_k": 41, "max_new_tokens": 250, "repetition_penalty": 1.176}}}
dataset
null
544
ImranzamanML/jina_embeddings-optimised_deen
ImranzamanML
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "fill-mask", "feature-extraction", "sentence-similarity", "mteb", "transformers", "transformers.js", "custom_code", "de", "en", "arxiv:2108.12409", "arxiv:2402.17016", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "region:us" ]
2025-02-25T20:13:34Z
2025-02-25T20:15:29+00:00
13
0
--- language: - de - en license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - transformers - transformers.js inference: false model-index: - name: jina-embeddings-v2-base-de results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.76119402985076 - type: ap value: 35.99577188521176 - type: f1 value: 67.50397431543269 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 68.9186295503212 - type: ap value: 79.73307115840507 - type: f1 value: 66.66245744831339 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 77.52215 - type: ap value: 71.85051037177416 - type: f1 value: 77.4171096157774 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.498 - type: f1 value: 38.058193386555956 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 37.717999999999996 - type: f1 value: 37.22674371574757 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 25.319999999999997 - type: map_at_10 value: 40.351 - type: map_at_100 value: 41.435 - type: map_at_1000 value: 41.443000000000005 - type: map_at_3 value: 35.266 - type: map_at_5 value: 37.99 - type: mrr_at_1 value: 25.746999999999996 - type: mrr_at_10 value: 40.515 - type: mrr_at_100 value: 41.606 - type: mrr_at_1000 value: 41.614000000000004 - type: mrr_at_3 value: 35.42 - type: mrr_at_5 value: 38.112 - type: ndcg_at_1 value: 25.319999999999997 - type: ndcg_at_10 value: 49.332 - type: ndcg_at_100 value: 53.909 - type: ndcg_at_1000 value: 54.089 - type: ndcg_at_3 value: 38.705 - type: ndcg_at_5 value: 43.606 - type: precision_at_1 value: 25.319999999999997 - type: precision_at_10 value: 7.831 - type: precision_at_100 value: 0.9820000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.24 - type: precision_at_5 value: 12.119 - type: recall_at_1 value: 25.319999999999997 - type: recall_at_10 value: 78.307 - type: recall_at_100 value: 98.222 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 48.72 - type: recall_at_5 value: 60.597 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 41.43100588255654 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.08988904593667 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 
2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.55514765595906 - type: mrr value: 73.51393835465858 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 79.6723823121172 - type: cos_sim_spearman value: 76.90596922214986 - type: euclidean_pearson value: 77.87910737957918 - type: euclidean_spearman value: 76.66319260598262 - type: manhattan_pearson value: 77.37039493457965 - type: manhattan_spearman value: 76.09872191280964 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.97703549060543 - type: f1 value: 98.86569241475296 - type: precision value: 98.81002087682673 - type: recall value: 98.97703549060543 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 83.93506493506493 - type: f1 value: 83.91014949949302 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 34.970675877585144 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 28.779230269190954 - task: type: Clustering dataset: name: MTEB BlurbsClusteringP2P type: slvnwhrl/blurbs-clustering-p2p config: default split: test revision: a2dd5b02a77de3466a3eaa98ae586b5610314496 metrics: - type: v_measure value: 35.490175601567216 - task: type: Clustering dataset: name: MTEB BlurbsClusteringS2S type: slvnwhrl/blurbs-clustering-s2s config: default split: test revision: 9bfff9a7f8f6dc6ffc9da71c48dd48b68696471d metrics: - type: v_measure value: 16.16638280560168 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.830999999999996 - type: map_at_10 value: 41.355 - type: map_at_100 value: 42.791000000000004 - type: map_at_1000 value: 42.918 - type: map_at_3 value: 38.237 - type: map_at_5 value: 40.066 - type: mrr_at_1 value: 38.484 - type: mrr_at_10 value: 47.593 - type: mrr_at_100 value: 48.388 - type: mrr_at_1000 value: 48.439 - type: mrr_at_3 value: 45.279 - type: mrr_at_5 value: 46.724 - type: ndcg_at_1 value: 38.484 - type: ndcg_at_10 value: 47.27 - type: ndcg_at_100 value: 52.568000000000005 - type: ndcg_at_1000 value: 54.729000000000006 - type: ndcg_at_3 value: 43.061 - type: ndcg_at_5 value: 45.083 - type: precision_at_1 value: 38.484 - type: precision_at_10 value: 8.927 - type: precision_at_100 value: 1.425 - type: precision_at_1000 value: 0.19 - type: precision_at_3 value: 20.791999999999998 - type: precision_at_5 value: 14.85 - type: recall_at_1 value: 30.830999999999996 - type: recall_at_10 value: 57.87799999999999 - type: recall_at_100 value: 80.124 - type: recall_at_1000 value: 94.208 - type: recall_at_3 value: 45.083 - type: recall_at_5 value: 51.154999999999994 - type: map_at_1 value: 25.782 - type: map_at_10 value: 34.492 - type: map_at_100 value: 35.521 - type: map_at_1000 value: 35.638 - type: map_at_3 value: 31.735999999999997 - type: 
map_at_5 value: 33.339 - type: mrr_at_1 value: 32.357 - type: mrr_at_10 value: 39.965 - type: mrr_at_100 value: 40.644000000000005 - type: mrr_at_1000 value: 40.695 - type: mrr_at_3 value: 37.739 - type: mrr_at_5 value: 39.061 - type: ndcg_at_1 value: 32.357 - type: ndcg_at_10 value: 39.644 - type: ndcg_at_100 value: 43.851 - type: ndcg_at_1000 value: 46.211999999999996 - type: ndcg_at_3 value: 35.675000000000004 - type: ndcg_at_5 value: 37.564 - type: precision_at_1 value: 32.357 - type: precision_at_10 value: 7.344 - type: precision_at_100 value: 1.201 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 17.155 - type: precision_at_5 value: 12.166 - type: recall_at_1 value: 25.782 - type: recall_at_10 value: 49.132999999999996 - type: recall_at_100 value: 67.24 - type: recall_at_1000 value: 83.045 - type: recall_at_3 value: 37.021 - type: recall_at_5 value: 42.548 - type: map_at_1 value: 35.778999999999996 - type: map_at_10 value: 47.038000000000004 - type: map_at_100 value: 48.064 - type: map_at_1000 value: 48.128 - type: map_at_3 value: 44.186 - type: map_at_5 value: 45.788000000000004 - type: mrr_at_1 value: 41.254000000000005 - type: mrr_at_10 value: 50.556999999999995 - type: mrr_at_100 value: 51.296 - type: mrr_at_1000 value: 51.331 - type: mrr_at_3 value: 48.318 - type: mrr_at_5 value: 49.619 - type: ndcg_at_1 value: 41.254000000000005 - type: ndcg_at_10 value: 52.454 - type: ndcg_at_100 value: 56.776 - type: ndcg_at_1000 value: 58.181000000000004 - type: ndcg_at_3 value: 47.713 - type: ndcg_at_5 value: 49.997 - type: precision_at_1 value: 41.254000000000005 - type: precision_at_10 value: 8.464 - type: precision_at_100 value: 1.157 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 21.526 - type: precision_at_5 value: 14.696000000000002 - type: recall_at_1 value: 35.778999999999996 - type: recall_at_10 value: 64.85300000000001 - type: recall_at_100 value: 83.98400000000001 - type: recall_at_1000 value: 94.18299999999999 - type: recall_at_3 value: 51.929 - type: recall_at_5 value: 57.666 - type: map_at_1 value: 21.719 - type: map_at_10 value: 29.326999999999998 - type: map_at_100 value: 30.314000000000004 - type: map_at_1000 value: 30.397000000000002 - type: map_at_3 value: 27.101 - type: map_at_5 value: 28.141 - type: mrr_at_1 value: 23.503 - type: mrr_at_10 value: 31.225 - type: mrr_at_100 value: 32.096000000000004 - type: mrr_at_1000 value: 32.159 - type: mrr_at_3 value: 29.076999999999998 - type: mrr_at_5 value: 30.083 - type: ndcg_at_1 value: 23.503 - type: ndcg_at_10 value: 33.842 - type: ndcg_at_100 value: 39.038000000000004 - type: ndcg_at_1000 value: 41.214 - type: ndcg_at_3 value: 29.347 - type: ndcg_at_5 value: 31.121 - type: precision_at_1 value: 23.503 - type: precision_at_10 value: 5.266 - type: precision_at_100 value: 0.831 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 12.504999999999999 - type: precision_at_5 value: 8.565000000000001 - type: recall_at_1 value: 21.719 - type: recall_at_10 value: 46.024 - type: recall_at_100 value: 70.78999999999999 - type: recall_at_1000 value: 87.022 - type: recall_at_3 value: 33.64 - type: recall_at_5 value: 37.992 - type: map_at_1 value: 15.601 - type: map_at_10 value: 22.054000000000002 - type: map_at_100 value: 23.177 - type: map_at_1000 value: 23.308 - type: map_at_3 value: 19.772000000000002 - type: map_at_5 value: 21.055 - type: mrr_at_1 value: 19.403000000000002 - type: mrr_at_10 value: 26.409 - type: mrr_at_100 value: 27.356 - type: mrr_at_1000 value: 27.441 - type: 
mrr_at_3 value: 24.108999999999998 - type: mrr_at_5 value: 25.427 - type: ndcg_at_1 value: 19.403000000000002 - type: ndcg_at_10 value: 26.474999999999998 - type: ndcg_at_100 value: 32.086 - type: ndcg_at_1000 value: 35.231 - type: ndcg_at_3 value: 22.289 - type: ndcg_at_5 value: 24.271 - type: precision_at_1 value: 19.403000000000002 - type: precision_at_10 value: 4.813 - type: precision_at_100 value: 0.8869999999999999 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 10.531 - type: precision_at_5 value: 7.710999999999999 - type: recall_at_1 value: 15.601 - type: recall_at_10 value: 35.916 - type: recall_at_100 value: 60.8 - type: recall_at_1000 value: 83.245 - type: recall_at_3 value: 24.321 - type: recall_at_5 value: 29.372999999999998 - type: map_at_1 value: 25.522 - type: map_at_10 value: 34.854 - type: map_at_100 value: 36.269 - type: map_at_1000 value: 36.387 - type: map_at_3 value: 32.187 - type: map_at_5 value: 33.692 - type: mrr_at_1 value: 31.375999999999998 - type: mrr_at_10 value: 40.471000000000004 - type: mrr_at_100 value: 41.481 - type: mrr_at_1000 value: 41.533 - type: mrr_at_3 value: 38.274 - type: mrr_at_5 value: 39.612 - type: ndcg_at_1 value: 31.375999999999998 - type: ndcg_at_10 value: 40.298 - type: ndcg_at_100 value: 46.255 - type: ndcg_at_1000 value: 48.522 - type: ndcg_at_3 value: 36.049 - type: ndcg_at_5 value: 38.095 - type: precision_at_1 value: 31.375999999999998 - type: precision_at_10 value: 7.305000000000001 - type: precision_at_100 value: 1.201 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 17.132 - type: precision_at_5 value: 12.107999999999999 - type: recall_at_1 value: 25.522 - type: recall_at_10 value: 50.988 - type: recall_at_100 value: 76.005 - type: recall_at_1000 value: 91.11200000000001 - type: recall_at_3 value: 38.808 - type: recall_at_5 value: 44.279 - type: map_at_1 value: 24.615000000000002 - type: map_at_10 value: 32.843 - type: map_at_100 value: 34.172999999999995 - type: map_at_1000 value: 34.286 - type: map_at_3 value: 30.125 - type: map_at_5 value: 31.495 - type: mrr_at_1 value: 30.023 - type: mrr_at_10 value: 38.106 - type: mrr_at_100 value: 39.01 - type: mrr_at_1000 value: 39.071 - type: mrr_at_3 value: 35.674 - type: mrr_at_5 value: 36.924 - type: ndcg_at_1 value: 30.023 - type: ndcg_at_10 value: 38.091 - type: ndcg_at_100 value: 43.771 - type: ndcg_at_1000 value: 46.315 - type: ndcg_at_3 value: 33.507 - type: ndcg_at_5 value: 35.304 - type: precision_at_1 value: 30.023 - type: precision_at_10 value: 6.837999999999999 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 15.562999999999999 - type: precision_at_5 value: 10.936 - type: recall_at_1 value: 24.615000000000002 - type: recall_at_10 value: 48.691 - type: recall_at_100 value: 72.884 - type: recall_at_1000 value: 90.387 - type: recall_at_3 value: 35.659 - type: recall_at_5 value: 40.602 - type: map_at_1 value: 23.223666666666666 - type: map_at_10 value: 31.338166666666673 - type: map_at_100 value: 32.47358333333333 - type: map_at_1000 value: 32.5955 - type: map_at_3 value: 28.84133333333333 - type: map_at_5 value: 30.20808333333333 - type: mrr_at_1 value: 27.62483333333333 - type: mrr_at_10 value: 35.385916666666674 - type: mrr_at_100 value: 36.23325 - type: mrr_at_1000 value: 36.29966666666667 - type: mrr_at_3 value: 33.16583333333333 - type: mrr_at_5 value: 34.41983333333334 - type: ndcg_at_1 value: 27.62483333333333 - type: ndcg_at_10 value: 36.222 - type: ndcg_at_100 value: 
41.29491666666666 - type: ndcg_at_1000 value: 43.85508333333333 - type: ndcg_at_3 value: 31.95116666666667 - type: ndcg_at_5 value: 33.88541666666667 - type: precision_at_1 value: 27.62483333333333 - type: precision_at_10 value: 6.339916666666667 - type: precision_at_100 value: 1.0483333333333333 - type: precision_at_1000 value: 0.14608333333333334 - type: precision_at_3 value: 14.726500000000003 - type: precision_at_5 value: 10.395 - type: recall_at_1 value: 23.223666666666666 - type: recall_at_10 value: 46.778999999999996 - type: recall_at_100 value: 69.27141666666667 - type: recall_at_1000 value: 87.27383333333334 - type: recall_at_3 value: 34.678749999999994 - type: recall_at_5 value: 39.79900000000001 - type: map_at_1 value: 21.677 - type: map_at_10 value: 27.828000000000003 - type: map_at_100 value: 28.538999999999998 - type: map_at_1000 value: 28.64 - type: map_at_3 value: 26.105 - type: map_at_5 value: 27.009 - type: mrr_at_1 value: 24.387 - type: mrr_at_10 value: 30.209999999999997 - type: mrr_at_100 value: 30.953000000000003 - type: mrr_at_1000 value: 31.029 - type: mrr_at_3 value: 28.707 - type: mrr_at_5 value: 29.610999999999997 - type: ndcg_at_1 value: 24.387 - type: ndcg_at_10 value: 31.378 - type: ndcg_at_100 value: 35.249 - type: ndcg_at_1000 value: 37.923 - type: ndcg_at_3 value: 28.213 - type: ndcg_at_5 value: 29.658 - type: precision_at_1 value: 24.387 - type: precision_at_10 value: 4.8309999999999995 - type: precision_at_100 value: 0.73 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 12.168 - type: precision_at_5 value: 8.251999999999999 - type: recall_at_1 value: 21.677 - type: recall_at_10 value: 40.069 - type: recall_at_100 value: 58.077 - type: recall_at_1000 value: 77.97 - type: recall_at_3 value: 31.03 - type: recall_at_5 value: 34.838 - type: map_at_1 value: 14.484 - type: map_at_10 value: 20.355 - type: map_at_100 value: 21.382 - type: map_at_1000 value: 21.511 - type: map_at_3 value: 18.448 - type: map_at_5 value: 19.451999999999998 - type: mrr_at_1 value: 17.584 - type: mrr_at_10 value: 23.825 - type: mrr_at_100 value: 24.704 - type: mrr_at_1000 value: 24.793000000000003 - type: mrr_at_3 value: 21.92 - type: mrr_at_5 value: 22.97 - type: ndcg_at_1 value: 17.584 - type: ndcg_at_10 value: 24.315 - type: ndcg_at_100 value: 29.354999999999997 - type: ndcg_at_1000 value: 32.641999999999996 - type: ndcg_at_3 value: 20.802 - type: ndcg_at_5 value: 22.335 - type: precision_at_1 value: 17.584 - type: precision_at_10 value: 4.443 - type: precision_at_100 value: 0.8160000000000001 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 9.807 - type: precision_at_5 value: 7.0889999999999995 - type: recall_at_1 value: 14.484 - type: recall_at_10 value: 32.804 - type: recall_at_100 value: 55.679 - type: recall_at_1000 value: 79.63 - type: recall_at_3 value: 22.976 - type: recall_at_5 value: 26.939 - type: map_at_1 value: 22.983999999999998 - type: map_at_10 value: 30.812 - type: map_at_100 value: 31.938 - type: map_at_1000 value: 32.056000000000004 - type: map_at_3 value: 28.449999999999996 - type: map_at_5 value: 29.542 - type: mrr_at_1 value: 27.145999999999997 - type: mrr_at_10 value: 34.782999999999994 - type: mrr_at_100 value: 35.699 - type: mrr_at_1000 value: 35.768 - type: mrr_at_3 value: 32.572 - type: mrr_at_5 value: 33.607 - type: ndcg_at_1 value: 27.145999999999997 - type: ndcg_at_10 value: 35.722 - type: ndcg_at_100 value: 40.964 - type: ndcg_at_1000 value: 43.598 - type: ndcg_at_3 value: 31.379 - type: ndcg_at_5 value: 32.924 - 
type: precision_at_1 value: 27.145999999999997 - type: precision_at_10 value: 6.063000000000001 - type: precision_at_100 value: 0.9730000000000001 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 14.366000000000001 - type: precision_at_5 value: 9.776 - type: recall_at_1 value: 22.983999999999998 - type: recall_at_10 value: 46.876 - type: recall_at_100 value: 69.646 - type: recall_at_1000 value: 88.305 - type: recall_at_3 value: 34.471000000000004 - type: recall_at_5 value: 38.76 - type: map_at_1 value: 23.017000000000003 - type: map_at_10 value: 31.049 - type: map_at_100 value: 32.582 - type: map_at_1000 value: 32.817 - type: map_at_3 value: 28.303 - type: map_at_5 value: 29.854000000000003 - type: mrr_at_1 value: 27.866000000000003 - type: mrr_at_10 value: 35.56 - type: mrr_at_100 value: 36.453 - type: mrr_at_1000 value: 36.519 - type: mrr_at_3 value: 32.938 - type: mrr_at_5 value: 34.391 - type: ndcg_at_1 value: 27.866000000000003 - type: ndcg_at_10 value: 36.506 - type: ndcg_at_100 value: 42.344 - type: ndcg_at_1000 value: 45.213 - type: ndcg_at_3 value: 31.805 - type: ndcg_at_5 value: 33.933 - type: precision_at_1 value: 27.866000000000003 - type: precision_at_10 value: 7.016 - type: precision_at_100 value: 1.468 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_3 value: 14.822 - type: precision_at_5 value: 10.791 - type: recall_at_1 value: 23.017000000000003 - type: recall_at_10 value: 47.053 - type: recall_at_100 value: 73.177 - type: recall_at_1000 value: 91.47800000000001 - type: recall_at_3 value: 33.675 - type: recall_at_5 value: 39.36 - type: map_at_1 value: 16.673 - type: map_at_10 value: 24.051000000000002 - type: map_at_100 value: 24.933 - type: map_at_1000 value: 25.06 - type: map_at_3 value: 21.446 - type: map_at_5 value: 23.064 - type: mrr_at_1 value: 18.115000000000002 - type: mrr_at_10 value: 25.927 - type: mrr_at_100 value: 26.718999999999998 - type: mrr_at_1000 value: 26.817999999999998 - type: mrr_at_3 value: 23.383000000000003 - type: mrr_at_5 value: 25.008999999999997 - type: ndcg_at_1 value: 18.115000000000002 - type: ndcg_at_10 value: 28.669 - type: ndcg_at_100 value: 33.282000000000004 - type: ndcg_at_1000 value: 36.481 - type: ndcg_at_3 value: 23.574 - type: ndcg_at_5 value: 26.340000000000003 - type: precision_at_1 value: 18.115000000000002 - type: precision_at_10 value: 4.769 - type: precision_at_100 value: 0.767 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 10.351 - type: precision_at_5 value: 7.8 - type: recall_at_1 value: 16.673 - type: recall_at_10 value: 41.063 - type: recall_at_100 value: 62.851 - type: recall_at_1000 value: 86.701 - type: recall_at_3 value: 27.532 - type: recall_at_5 value: 34.076 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 8.752 - type: map_at_10 value: 15.120000000000001 - type: map_at_100 value: 16.678 - type: map_at_1000 value: 16.854 - type: map_at_3 value: 12.603 - type: map_at_5 value: 13.918 - type: mrr_at_1 value: 19.283 - type: mrr_at_10 value: 29.145 - type: mrr_at_100 value: 30.281000000000002 - type: mrr_at_1000 value: 30.339 - type: mrr_at_3 value: 26.069 - type: mrr_at_5 value: 27.864 - type: ndcg_at_1 value: 19.283 - type: ndcg_at_10 value: 21.804000000000002 - type: ndcg_at_100 value: 28.576 - type: ndcg_at_1000 value: 32.063 - type: ndcg_at_3 value: 17.511 - type: ndcg_at_5 value: 19.112000000000002 - type: precision_at_1 value: 19.283 - type: 
precision_at_10 value: 6.873 - type: precision_at_100 value: 1.405 - type: precision_at_1000 value: 0.20500000000000002 - type: precision_at_3 value: 13.16 - type: precision_at_5 value: 10.189 - type: recall_at_1 value: 8.752 - type: recall_at_10 value: 27.004 - type: recall_at_100 value: 50.648 - type: recall_at_1000 value: 70.458 - type: recall_at_3 value: 16.461000000000002 - type: recall_at_5 value: 20.973 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 6.81 - type: map_at_10 value: 14.056 - type: map_at_100 value: 18.961 - type: map_at_1000 value: 20.169 - type: map_at_3 value: 10.496 - type: map_at_5 value: 11.952 - type: mrr_at_1 value: 53.5 - type: mrr_at_10 value: 63.479 - type: mrr_at_100 value: 63.971999999999994 - type: mrr_at_1000 value: 63.993 - type: mrr_at_3 value: 61.541999999999994 - type: mrr_at_5 value: 62.778999999999996 - type: ndcg_at_1 value: 42.25 - type: ndcg_at_10 value: 31.471 - type: ndcg_at_100 value: 35.115 - type: ndcg_at_1000 value: 42.408 - type: ndcg_at_3 value: 35.458 - type: ndcg_at_5 value: 32.973 - type: precision_at_1 value: 53.5 - type: precision_at_10 value: 24.85 - type: precision_at_100 value: 7.79 - type: precision_at_1000 value: 1.599 - type: precision_at_3 value: 38.667 - type: precision_at_5 value: 31.55 - type: recall_at_1 value: 6.81 - type: recall_at_10 value: 19.344 - type: recall_at_100 value: 40.837 - type: recall_at_1000 value: 64.661 - type: recall_at_3 value: 11.942 - type: recall_at_5 value: 14.646 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 44.64499999999999 - type: f1 value: 39.39106911352714 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 48.196 - type: map_at_10 value: 61.404 - type: map_at_100 value: 61.846000000000004 - type: map_at_1000 value: 61.866 - type: map_at_3 value: 58.975 - type: map_at_5 value: 60.525 - type: mrr_at_1 value: 52.025 - type: mrr_at_10 value: 65.43299999999999 - type: mrr_at_100 value: 65.80799999999999 - type: mrr_at_1000 value: 65.818 - type: mrr_at_3 value: 63.146 - type: mrr_at_5 value: 64.64 - type: ndcg_at_1 value: 52.025 - type: ndcg_at_10 value: 67.889 - type: ndcg_at_100 value: 69.864 - type: ndcg_at_1000 value: 70.337 - type: ndcg_at_3 value: 63.315 - type: ndcg_at_5 value: 65.91799999999999 - type: precision_at_1 value: 52.025 - type: precision_at_10 value: 9.182 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 25.968000000000004 - type: precision_at_5 value: 17.006 - type: recall_at_1 value: 48.196 - type: recall_at_10 value: 83.885 - type: recall_at_100 value: 92.671 - type: recall_at_1000 value: 96.018 - type: recall_at_3 value: 71.59 - type: recall_at_5 value: 77.946 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 15.193000000000001 - type: map_at_10 value: 25.168000000000003 - type: map_at_100 value: 27.017000000000003 - type: map_at_1000 value: 27.205000000000002 - type: map_at_3 value: 21.746 - type: map_at_5 value: 23.579 - type: mrr_at_1 value: 31.635999999999996 - type: mrr_at_10 value: 40.077 - type: mrr_at_100 value: 41.112 - type: mrr_at_1000 value: 41.160999999999994 - type: mrr_at_3 
value: 37.937 - type: mrr_at_5 value: 39.18 - type: ndcg_at_1 value: 31.635999999999996 - type: ndcg_at_10 value: 32.298 - type: ndcg_at_100 value: 39.546 - type: ndcg_at_1000 value: 42.88 - type: ndcg_at_3 value: 29.221999999999998 - type: ndcg_at_5 value: 30.069000000000003 - type: precision_at_1 value: 31.635999999999996 - type: precision_at_10 value: 9.367 - type: precision_at_100 value: 1.645 - type: precision_at_1000 value: 0.22399999999999998 - type: precision_at_3 value: 20.01 - type: precision_at_5 value: 14.753 - type: recall_at_1 value: 15.193000000000001 - type: recall_at_10 value: 38.214999999999996 - type: recall_at_100 value: 65.95 - type: recall_at_1000 value: 85.85300000000001 - type: recall_at_3 value: 26.357000000000003 - type: recall_at_5 value: 31.319999999999997 - task: type: Retrieval dataset: name: MTEB GerDaLIR type: jinaai/ger_da_lir config: default split: test revision: None metrics: - type: map_at_1 value: 10.363 - type: map_at_10 value: 16.222 - type: map_at_100 value: 17.28 - type: map_at_1000 value: 17.380000000000003 - type: map_at_3 value: 14.054 - type: map_at_5 value: 15.203 - type: mrr_at_1 value: 11.644 - type: mrr_at_10 value: 17.625 - type: mrr_at_100 value: 18.608 - type: mrr_at_1000 value: 18.695999999999998 - type: mrr_at_3 value: 15.481 - type: mrr_at_5 value: 16.659 - type: ndcg_at_1 value: 11.628 - type: ndcg_at_10 value: 20.028000000000002 - type: ndcg_at_100 value: 25.505 - type: ndcg_at_1000 value: 28.288000000000004 - type: ndcg_at_3 value: 15.603 - type: ndcg_at_5 value: 17.642 - type: precision_at_1 value: 11.628 - type: precision_at_10 value: 3.5589999999999997 - type: precision_at_100 value: 0.664 - type: precision_at_1000 value: 0.092 - type: precision_at_3 value: 7.109999999999999 - type: precision_at_5 value: 5.401 - type: recall_at_1 value: 10.363 - type: recall_at_10 value: 30.586000000000002 - type: recall_at_100 value: 56.43 - type: recall_at_1000 value: 78.142 - type: recall_at_3 value: 18.651 - type: recall_at_5 value: 23.493 - task: type: Retrieval dataset: name: MTEB GermanDPR type: deepset/germandpr config: default split: test revision: 5129d02422a66be600ac89cd3e8531b4f97d347d metrics: - type: map_at_1 value: 60.78 - type: map_at_10 value: 73.91499999999999 - type: map_at_100 value: 74.089 - type: map_at_1000 value: 74.09400000000001 - type: map_at_3 value: 71.87 - type: map_at_5 value: 73.37700000000001 - type: mrr_at_1 value: 60.78 - type: mrr_at_10 value: 73.91499999999999 - type: mrr_at_100 value: 74.089 - type: mrr_at_1000 value: 74.09400000000001 - type: mrr_at_3 value: 71.87 - type: mrr_at_5 value: 73.37700000000001 - type: ndcg_at_1 value: 60.78 - type: ndcg_at_10 value: 79.35600000000001 - type: ndcg_at_100 value: 80.077 - type: ndcg_at_1000 value: 80.203 - type: ndcg_at_3 value: 75.393 - type: ndcg_at_5 value: 78.077 - type: precision_at_1 value: 60.78 - type: precision_at_10 value: 9.59 - type: precision_at_100 value: 0.9900000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 28.52 - type: precision_at_5 value: 18.4 - type: recall_at_1 value: 60.78 - type: recall_at_10 value: 95.902 - type: recall_at_100 value: 99.024 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 85.56099999999999 - type: recall_at_5 value: 92.0 - task: type: STS dataset: name: MTEB GermanSTSBenchmark type: jinaai/german-STSbenchmark config: default split: test revision: 49d9b423b996fea62b483f9ee6dfb5ec233515ca metrics: - type: cos_sim_pearson value: 88.49524420894356 - type: cos_sim_spearman value: 
88.32407839427714 - type: euclidean_pearson value: 87.25098779877104 - type: euclidean_spearman value: 88.22738098593608 - type: manhattan_pearson value: 87.23872691839607 - type: manhattan_spearman value: 88.2002968380165 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 31.81 - type: map_at_10 value: 46.238 - type: map_at_100 value: 47.141 - type: map_at_1000 value: 47.213 - type: map_at_3 value: 43.248999999999995 - type: map_at_5 value: 45.078 - type: mrr_at_1 value: 63.619 - type: mrr_at_10 value: 71.279 - type: mrr_at_100 value: 71.648 - type: mrr_at_1000 value: 71.665 - type: mrr_at_3 value: 69.76599999999999 - type: mrr_at_5 value: 70.743 - type: ndcg_at_1 value: 63.619 - type: ndcg_at_10 value: 55.38999999999999 - type: ndcg_at_100 value: 58.80800000000001 - type: ndcg_at_1000 value: 60.331999999999994 - type: ndcg_at_3 value: 50.727 - type: ndcg_at_5 value: 53.284 - type: precision_at_1 value: 63.619 - type: precision_at_10 value: 11.668000000000001 - type: precision_at_100 value: 1.434 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 32.001000000000005 - type: precision_at_5 value: 21.223 - type: recall_at_1 value: 31.81 - type: recall_at_10 value: 58.339 - type: recall_at_100 value: 71.708 - type: recall_at_1000 value: 81.85 - type: recall_at_3 value: 48.001 - type: recall_at_5 value: 53.059 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 68.60640000000001 - type: ap value: 62.84296904042086 - type: f1 value: 68.50643633327537 - task: type: Reranking dataset: name: MTEB MIRACL type: jinaai/miracl config: default split: test revision: 8741c3b61cd36ed9ca1b3d4203543a41793239e2 metrics: - type: map value: 64.29704335389768 - type: mrr value: 72.11962197159565 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.3844049247606 - type: f1 value: 89.2124328528015 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.36855452240067 - type: f1 value: 87.35458822097442 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 66.48654810761514 - type: f1 value: 50.07229882504409 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 63.832065370526905 - type: f1 value: 46.283579383385806 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.89038332212509 - type: f1 value: 61.86279849685129 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.11230665770006 - type: f1 value: 
67.44780095350535 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.25084061869536 - type: f1 value: 71.43965023016408 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.73907195696032 - type: f1 value: 73.69920814839061 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.32577306498249 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.759349326367783 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.401342674703425 - type: mrr value: 31.384379585660987 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 4.855 - type: map_at_10 value: 10.01 - type: map_at_100 value: 12.461 - type: map_at_1000 value: 13.776 - type: map_at_3 value: 7.252 - type: map_at_5 value: 8.679 - type: mrr_at_1 value: 41.176 - type: mrr_at_10 value: 49.323 - type: mrr_at_100 value: 49.954 - type: mrr_at_1000 value: 49.997 - type: mrr_at_3 value: 46.904 - type: mrr_at_5 value: 48.375 - type: ndcg_at_1 value: 39.318999999999996 - type: ndcg_at_10 value: 28.607 - type: ndcg_at_100 value: 26.554 - type: ndcg_at_1000 value: 35.731 - type: ndcg_at_3 value: 32.897999999999996 - type: ndcg_at_5 value: 31.53 - type: precision_at_1 value: 41.176 - type: precision_at_10 value: 20.867 - type: precision_at_100 value: 6.796 - type: precision_at_1000 value: 1.983 - type: precision_at_3 value: 30.547 - type: precision_at_5 value: 27.245 - type: recall_at_1 value: 4.855 - type: recall_at_10 value: 14.08 - type: recall_at_100 value: 28.188000000000002 - type: recall_at_1000 value: 60.07900000000001 - type: recall_at_3 value: 7.947 - type: recall_at_5 value: 10.786 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 26.906999999999996 - type: map_at_10 value: 41.147 - type: map_at_100 value: 42.269 - type: map_at_1000 value: 42.308 - type: map_at_3 value: 36.638999999999996 - type: map_at_5 value: 39.285 - type: mrr_at_1 value: 30.359 - type: mrr_at_10 value: 43.607 - type: mrr_at_100 value: 44.454 - type: mrr_at_1000 value: 44.481 - type: mrr_at_3 value: 39.644 - type: mrr_at_5 value: 42.061 - type: ndcg_at_1 value: 30.330000000000002 - type: ndcg_at_10 value: 48.899 - type: ndcg_at_100 value: 53.612 - type: ndcg_at_1000 value: 54.51200000000001 - type: ndcg_at_3 value: 40.262 - type: ndcg_at_5 value: 44.787 - type: precision_at_1 value: 30.330000000000002 - type: precision_at_10 value: 8.323 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 18.395 - type: precision_at_5 value: 13.627 - type: recall_at_1 value: 26.906999999999996 - type: recall_at_10 value: 70.215 - 
type: recall_at_100 value: 90.61200000000001 - type: recall_at_1000 value: 97.294 - type: recall_at_3 value: 47.784 - type: recall_at_5 value: 58.251 - task: type: PairClassification dataset: name: MTEB PawsX type: paws-x config: default split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 60.5 - type: cos_sim_ap value: 57.606096528877494 - type: cos_sim_f1 value: 62.24240307369892 - type: cos_sim_precision value: 45.27439024390244 - type: cos_sim_recall value: 99.55307262569832 - type: dot_accuracy value: 57.699999999999996 - type: dot_ap value: 51.289351057160616 - type: dot_f1 value: 62.25953130465197 - type: dot_precision value: 45.31568228105906 - type: dot_recall value: 99.4413407821229 - type: euclidean_accuracy value: 60.45 - type: euclidean_ap value: 57.616461421424034 - type: euclidean_f1 value: 62.313697657913416 - type: euclidean_precision value: 45.657826313052524 - type: euclidean_recall value: 98.10055865921787 - type: manhattan_accuracy value: 60.3 - type: manhattan_ap value: 57.580565271667325 - type: manhattan_f1 value: 62.24240307369892 - type: manhattan_precision value: 45.27439024390244 - type: manhattan_recall value: 99.55307262569832 - type: max_accuracy value: 60.5 - type: max_ap value: 57.616461421424034 - type: max_f1 value: 62.313697657913416 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.21300000000001 - type: map_at_10 value: 84.136 - type: map_at_100 value: 84.796 - type: map_at_1000 value: 84.812 - type: map_at_3 value: 81.182 - type: map_at_5 value: 83.027 - type: mrr_at_1 value: 80.91000000000001 - type: mrr_at_10 value: 87.155 - type: mrr_at_100 value: 87.27000000000001 - type: mrr_at_1000 value: 87.271 - type: mrr_at_3 value: 86.158 - type: mrr_at_5 value: 86.828 - type: ndcg_at_1 value: 80.88 - type: ndcg_at_10 value: 87.926 - type: ndcg_at_100 value: 89.223 - type: ndcg_at_1000 value: 89.321 - type: ndcg_at_3 value: 85.036 - type: ndcg_at_5 value: 86.614 - type: precision_at_1 value: 80.88 - type: precision_at_10 value: 13.350000000000001 - type: precision_at_100 value: 1.5310000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.173 - type: precision_at_5 value: 24.476 - type: recall_at_1 value: 70.21300000000001 - type: recall_at_10 value: 95.12 - type: recall_at_100 value: 99.535 - type: recall_at_1000 value: 99.977 - type: recall_at_3 value: 86.833 - type: recall_at_5 value: 91.26100000000001 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 47.754688783184875 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 54.875736374329364 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 3.773 - type: map_at_10 value: 9.447 - type: map_at_100 value: 11.1 - type: map_at_1000 value: 11.37 - type: map_at_3 value: 6.787 - type: map_at_5 value: 8.077 - type: mrr_at_1 value: 18.5 - type: mrr_at_10 value: 28.227000000000004 - type: mrr_at_100 value: 29.445 - type: mrr_at_1000 value: 29.515 - type: mrr_at_3 value: 25.2 - type: mrr_at_5 value: 27.055 - type: ndcg_at_1 value: 18.5 - type: 
ndcg_at_10 value: 16.29 - type: ndcg_at_100 value: 23.250999999999998 - type: ndcg_at_1000 value: 28.445999999999998 - type: ndcg_at_3 value: 15.376000000000001 - type: ndcg_at_5 value: 13.528 - type: precision_at_1 value: 18.5 - type: precision_at_10 value: 8.51 - type: precision_at_100 value: 1.855 - type: precision_at_1000 value: 0.311 - type: precision_at_3 value: 14.533 - type: precision_at_5 value: 12.0 - type: recall_at_1 value: 3.773 - type: recall_at_10 value: 17.282 - type: recall_at_100 value: 37.645 - type: recall_at_1000 value: 63.138000000000005 - type: recall_at_3 value: 8.853 - type: recall_at_5 value: 12.168 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.32789517976525 - type: cos_sim_spearman value: 80.32750384145629 - type: euclidean_pearson value: 81.5025131452508 - type: euclidean_spearman value: 80.24797115147175 - type: manhattan_pearson value: 81.51634463412002 - type: manhattan_spearman value: 80.24614721495055 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 88.47050448992432 - type: cos_sim_spearman value: 80.58919997743621 - type: euclidean_pearson value: 85.83258918113664 - type: euclidean_spearman value: 80.97441389240902 - type: manhattan_pearson value: 85.7798262013878 - type: manhattan_spearman value: 80.97208703064196 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.95341439711532 - type: cos_sim_spearman value: 86.59127484634989 - type: euclidean_pearson value: 85.57850603454227 - type: euclidean_spearman value: 86.47130477363419 - type: manhattan_pearson value: 85.59387925447652 - type: manhattan_spearman value: 86.50665427391583 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.39810909161844 - type: cos_sim_spearman value: 82.98595295546008 - type: euclidean_pearson value: 84.04681129969951 - type: euclidean_spearman value: 82.98197460689866 - type: manhattan_pearson value: 83.9918798171185 - type: manhattan_spearman value: 82.91148131768082 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.02072712147692 - type: cos_sim_spearman value: 88.78821332623012 - type: euclidean_pearson value: 88.12132045572747 - type: euclidean_spearman value: 88.74273451067364 - type: manhattan_pearson value: 88.05431550059166 - type: manhattan_spearman value: 88.67610233020723 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.96134704624787 - type: cos_sim_spearman value: 84.44062976314666 - type: euclidean_pearson value: 84.03642536310323 - type: euclidean_spearman value: 84.4535014579785 - type: manhattan_pearson value: 83.92874228901483 - type: manhattan_spearman value: 84.33634314951631 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - 
type: cos_sim_pearson value: 87.3154168064887 - type: cos_sim_spearman value: 86.72393652571682 - type: euclidean_pearson value: 86.04193246174164 - type: euclidean_spearman value: 86.30482896608093 - type: manhattan_pearson value: 85.95524084651859 - type: manhattan_spearman value: 86.06031431994282 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.91079682750804 - type: cos_sim_spearman value: 89.30961836617064 - type: euclidean_pearson value: 88.86249564158628 - type: euclidean_spearman value: 89.04772899592396 - type: manhattan_pearson value: 88.85579791315043 - type: manhattan_spearman value: 88.94190462541333 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.00558145551088 - type: cos_sim_spearman value: 67.96601170393878 - type: euclidean_pearson value: 67.87627043214336 - type: euclidean_spearman value: 66.76402572303859 - type: manhattan_pearson value: 67.88306560555452 - type: manhattan_spearman value: 66.6273862035506 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 50.83759332748726 - type: cos_sim_spearman value: 59.066344562858006 - type: euclidean_pearson value: 50.08955848154131 - type: euclidean_spearman value: 58.36517305855221 - type: manhattan_pearson value: 50.05257267223111 - type: manhattan_spearman value: 58.37570252804986 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.22749007956492 - type: cos_sim_spearman value: 55.97282077657827 - type: euclidean_pearson value: 62.10661533695752 - type: euclidean_spearman value: 53.62780854854067 - type: manhattan_pearson value: 62.37138085709719 - type: manhattan_spearman value: 54.17556356828155 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.91145397065878 - type: cos_sim_spearman value: 88.13960018389005 - type: euclidean_pearson value: 87.67618876224006 - type: euclidean_spearman value: 87.99119480810556 - type: manhattan_pearson value: 87.67920297334753 - type: manhattan_spearman value: 87.99113250064492 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 78.09133563707582 - type: mrr value: 93.2415288052543 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 47.760999999999996 - type: map_at_10 value: 56.424 - type: map_at_100 value: 57.24399999999999 - type: map_at_1000 value: 57.278 - type: map_at_3 value: 53.68000000000001 - type: map_at_5 value: 55.442 - type: mrr_at_1 value: 50.666999999999994 - type: mrr_at_10 value: 58.012 - type: mrr_at_100 value: 58.736 - type: mrr_at_1000 value: 58.769000000000005 - type: mrr_at_3 value: 56.056 - type: mrr_at_5 value: 57.321999999999996 - type: ndcg_at_1 value: 50.666999999999994 - type: ndcg_at_10 
value: 60.67700000000001 - type: ndcg_at_100 value: 64.513 - type: ndcg_at_1000 value: 65.62400000000001 - type: ndcg_at_3 value: 56.186 - type: ndcg_at_5 value: 58.692 - type: precision_at_1 value: 50.666999999999994 - type: precision_at_10 value: 8.200000000000001 - type: precision_at_100 value: 1.023 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 21.889 - type: precision_at_5 value: 14.866999999999999 - type: recall_at_1 value: 47.760999999999996 - type: recall_at_10 value: 72.006 - type: recall_at_100 value: 89.767 - type: recall_at_1000 value: 98.833 - type: recall_at_3 value: 60.211000000000006 - type: recall_at_5 value: 66.3 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.79009900990098 - type: cos_sim_ap value: 94.86690691995835 - type: cos_sim_f1 value: 89.37875751503007 - type: cos_sim_precision value: 89.5582329317269 - type: cos_sim_recall value: 89.2 - type: dot_accuracy value: 99.76336633663367 - type: dot_ap value: 94.26453740761586 - type: dot_f1 value: 88.00783162016641 - type: dot_precision value: 86.19367209971237 - type: dot_recall value: 89.9 - type: euclidean_accuracy value: 99.7940594059406 - type: euclidean_ap value: 94.85459757524379 - type: euclidean_f1 value: 89.62779156327544 - type: euclidean_precision value: 88.96551724137932 - type: euclidean_recall value: 90.3 - type: manhattan_accuracy value: 99.79009900990098 - type: manhattan_ap value: 94.76971336654465 - type: manhattan_f1 value: 89.35323383084577 - type: manhattan_precision value: 88.91089108910892 - type: manhattan_recall value: 89.8 - type: max_accuracy value: 99.7940594059406 - type: max_ap value: 94.86690691995835 - type: max_f1 value: 89.62779156327544 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 55.38197670064987 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.08330158937971 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.50367079063226 - type: mrr value: 50.30444943128768 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.37739520909561 - type: cos_sim_spearman value: 31.548500943973913 - type: dot_pearson value: 29.983610104303 - type: dot_spearman value: 29.90185869098618 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.198 - type: map_at_10 value: 1.5810000000000002 - type: map_at_100 value: 9.064 - type: map_at_1000 value: 22.161 - type: map_at_3 value: 0.536 - type: map_at_5 value: 0.8370000000000001 - type: mrr_at_1 value: 80.0 - type: mrr_at_10 value: 86.75 - type: mrr_at_100 value: 86.799 - type: mrr_at_1000 value: 86.799 - type: mrr_at_3 value: 85.0 - type: mrr_at_5 value: 86.5 - 
type: ndcg_at_1 value: 73.0 - type: ndcg_at_10 value: 65.122 - type: ndcg_at_100 value: 51.853 - type: ndcg_at_1000 value: 47.275 - type: ndcg_at_3 value: 66.274 - type: ndcg_at_5 value: 64.826 - type: precision_at_1 value: 80.0 - type: precision_at_10 value: 70.19999999999999 - type: precision_at_100 value: 53.480000000000004 - type: precision_at_1000 value: 20.946 - type: precision_at_3 value: 71.333 - type: precision_at_5 value: 70.0 - type: recall_at_1 value: 0.198 - type: recall_at_10 value: 1.884 - type: recall_at_100 value: 12.57 - type: recall_at_1000 value: 44.208999999999996 - type: recall_at_3 value: 0.5890000000000001 - type: recall_at_5 value: 0.95 - task: type: Clustering dataset: name: MTEB TenKGnadClusteringP2P type: slvnwhrl/tenkgnad-clustering-p2p config: default split: test revision: 5c59e41555244b7e45c9a6be2d720ab4bafae558 metrics: - type: v_measure value: 42.84199261133083 - task: type: Clustering dataset: name: MTEB TenKGnadClusteringS2S type: slvnwhrl/tenkgnad-clustering-s2s config: default split: test revision: 6cddbe003f12b9b140aec477b583ac4191f01786 metrics: - type: v_measure value: 23.689557114798838 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.941 - type: map_at_10 value: 8.222 - type: map_at_100 value: 14.277999999999999 - type: map_at_1000 value: 15.790000000000001 - type: map_at_3 value: 4.4670000000000005 - type: map_at_5 value: 5.762 - type: mrr_at_1 value: 24.490000000000002 - type: mrr_at_10 value: 38.784 - type: mrr_at_100 value: 39.724 - type: mrr_at_1000 value: 39.724 - type: mrr_at_3 value: 33.333 - type: mrr_at_5 value: 37.415 - type: ndcg_at_1 value: 22.448999999999998 - type: ndcg_at_10 value: 21.026 - type: ndcg_at_100 value: 33.721000000000004 - type: ndcg_at_1000 value: 45.045 - type: ndcg_at_3 value: 20.053 - type: ndcg_at_5 value: 20.09 - type: precision_at_1 value: 24.490000000000002 - type: precision_at_10 value: 19.796 - type: precision_at_100 value: 7.469 - type: precision_at_1000 value: 1.48 - type: precision_at_3 value: 21.769 - type: precision_at_5 value: 21.224 - type: recall_at_1 value: 1.941 - type: recall_at_10 value: 14.915999999999999 - type: recall_at_100 value: 46.155 - type: recall_at_1000 value: 80.664 - type: recall_at_3 value: 5.629 - type: recall_at_5 value: 8.437 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.64800000000001 - type: ap value: 12.914826731261094 - type: f1 value: 53.05213503422915 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.427277872099594 - type: f1 value: 60.78292007556828 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 40.48134168406559 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.79465935506944 - type: cos_sim_ap value: 70.24589055290592 - type: 
cos_sim_f1 value: 65.0994575045208 - type: cos_sim_precision value: 63.76518218623482 - type: cos_sim_recall value: 66.49076517150397 - type: dot_accuracy value: 84.63968528342374 - type: dot_ap value: 69.84683095084355 - type: dot_f1 value: 64.50606169727523 - type: dot_precision value: 59.1719885487778 - type: dot_recall value: 70.89709762532982 - type: euclidean_accuracy value: 84.76485664898374 - type: euclidean_ap value: 70.20556438685551 - type: euclidean_f1 value: 65.06796614516543 - type: euclidean_precision value: 63.29840319361277 - type: euclidean_recall value: 66.93931398416886 - type: manhattan_accuracy value: 84.72313286046374 - type: manhattan_ap value: 70.17151475534308 - type: manhattan_f1 value: 65.31379180759113 - type: manhattan_precision value: 62.17505366086334 - type: manhattan_recall value: 68.7862796833773 - type: max_accuracy value: 84.79465935506944 - type: max_ap value: 70.24589055290592 - type: max_f1 value: 65.31379180759113 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.95874568246207 - type: cos_sim_ap value: 85.82517548264127 - type: cos_sim_f1 value: 78.22288041466125 - type: cos_sim_precision value: 75.33875338753387 - type: cos_sim_recall value: 81.33661841700031 - type: dot_accuracy value: 88.836496293709 - type: dot_ap value: 85.53430720252186 - type: dot_f1 value: 78.10616085869725 - type: dot_precision value: 74.73269555430501 - type: dot_recall value: 81.79858330766862 - type: euclidean_accuracy value: 88.92769821865176 - type: euclidean_ap value: 85.65904346964223 - type: euclidean_f1 value: 77.98774074208407 - type: euclidean_precision value: 73.72282795035315 - type: euclidean_recall value: 82.77640899291654 - type: manhattan_accuracy value: 88.86366282454303 - type: manhattan_ap value: 85.61599642231819 - type: manhattan_f1 value: 78.01480509061737 - type: manhattan_precision value: 74.10460685833044 - type: manhattan_recall value: 82.36064059131506 - type: max_accuracy value: 88.95874568246207 - type: max_ap value: 85.82517548264127 - type: max_f1 value: 78.22288041466125 - task: type: Retrieval dataset: name: MTEB WikiCLIR type: None config: default split: test revision: None metrics: - type: map_at_1 value: 3.9539999999999997 - type: map_at_10 value: 7.407 - type: map_at_100 value: 8.677999999999999 - type: map_at_1000 value: 9.077 - type: map_at_3 value: 5.987 - type: map_at_5 value: 6.6979999999999995 - type: mrr_at_1 value: 35.65 - type: mrr_at_10 value: 45.097 - type: mrr_at_100 value: 45.83 - type: mrr_at_1000 value: 45.871 - type: mrr_at_3 value: 42.63 - type: mrr_at_5 value: 44.104 - type: ndcg_at_1 value: 29.215000000000003 - type: ndcg_at_10 value: 22.694 - type: ndcg_at_100 value: 22.242 - type: ndcg_at_1000 value: 27.069 - type: ndcg_at_3 value: 27.641 - type: ndcg_at_5 value: 25.503999999999998 - type: precision_at_1 value: 35.65 - type: precision_at_10 value: 12.795000000000002 - type: precision_at_100 value: 3.354 - type: precision_at_1000 value: 0.743 - type: precision_at_3 value: 23.403 - type: precision_at_5 value: 18.474 - type: recall_at_1 value: 3.9539999999999997 - type: recall_at_10 value: 11.301 - type: recall_at_100 value: 22.919999999999998 - type: recall_at_1000 value: 40.146 - type: recall_at_3 value: 7.146 - type: recall_at_5 value: 8.844000000000001 - task: type: Retrieval dataset: name: MTEB XMarket type: jinaai/xmarket_de 
config: default split: test revision: 2336818db4c06570fcdf263e1bcb9993b786f67a metrics: - type: map_at_1 value: 4.872 - type: map_at_10 value: 10.658 - type: map_at_100 value: 13.422999999999998 - type: map_at_1000 value: 14.245 - type: map_at_3 value: 7.857 - type: map_at_5 value: 9.142999999999999 - type: mrr_at_1 value: 16.744999999999997 - type: mrr_at_10 value: 24.416 - type: mrr_at_100 value: 25.432 - type: mrr_at_1000 value: 25.502999999999997 - type: mrr_at_3 value: 22.096 - type: mrr_at_5 value: 23.421 - type: ndcg_at_1 value: 16.695999999999998 - type: ndcg_at_10 value: 18.66 - type: ndcg_at_100 value: 24.314 - type: ndcg_at_1000 value: 29.846 - type: ndcg_at_3 value: 17.041999999999998 - type: ndcg_at_5 value: 17.585 - type: precision_at_1 value: 16.695999999999998 - type: precision_at_10 value: 10.374 - type: precision_at_100 value: 3.988 - type: precision_at_1000 value: 1.1860000000000002 - type: precision_at_3 value: 14.21 - type: precision_at_5 value: 12.623000000000001 - type: recall_at_1 value: 4.872 - type: recall_at_10 value: 18.624 - type: recall_at_100 value: 40.988 - type: recall_at_1000 value: 65.33 - type: recall_at_3 value: 10.162 - type: recall_at_5 value: 13.517999999999999
---

<!-- TODO: add evaluation results here -->
<br><br>

<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>

<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>

## Quick Start

The easiest way to start using `jina-embeddings-v2-base-de` is to use Jina AI's [Embedding API](https://jina.ai/embeddings/).
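For a quick smoke test against the hosted API, the sketch below sends two sentences over HTTP. The endpoint URL, payload fields, and the `JINA_API_KEY` environment variable are assumptions based on common usage of the linked Embedding API page rather than an official specification, so check that page for the authoritative request format.

```python
# Hypothetical sketch of calling the hosted Embedding API (pip install requests).
# Endpoint, payload shape, and auth header are assumptions; see
# https://jina.ai/embeddings/ for the authoritative request format.
import os
import requests

API_URL = "https://api.jina.ai/v1/embeddings"  # assumed endpoint
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['JINA_API_KEY']}",  # assumed auth scheme
}
payload = {
    "model": "jina-embeddings-v2-base-de",
    "input": ["How is the weather today?", "Wie ist das Wetter heute?"],
}

response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()

# Each entry in "data" is expected to carry an "embedding" vector.
vectors = [item["embedding"] for item in response.json()["data"]]
print(len(vectors), len(vectors[0]))
```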
## Intended Usage & Model Info

`jina-embeddings-v2-base-de` is a German/English bilingual text **embedding model** supporting **8192 sequence length**.
It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence length.
We have designed it for high performance in mono-lingual & cross-lingual applications and trained it specifically to support mixed German-English input without bias.
Additionally, we provide the following embedding models:

`jina-embeddings-v2-base-de` ist ein zweisprachiges **Text Embedding Modell** für Deutsch und Englisch, welches Texteingaben mit einer Länge von bis zu **8192 Token unterstützt**.
Es basiert auf der adaptierten Bert-Modell-Architektur JinaBERT, welche mithilfe einer symmetrischen Variante von [ALiBi](https://arxiv.org/abs/2108.12409) längere Eingabetexte erlaubt.
Wir haben das Modell für hohe Performance in einsprachigen und cross-lingualen Anwendungen entwickelt und speziell darauf trainiert, gemischte deutsch-englische Eingaben ohne einen Bias zu kodieren.
Des Weiteren stellen wir folgende Embedding-Modelle bereit:

- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): 161 million parameters Chinese-English Bilingual embeddings.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): 161 million parameters German-English Bilingual embeddings **(you are here)**.
- [`jina-embeddings-v2-base-es`](): Spanish-English Bilingual embeddings (soon).
- [`jina-embeddings-v2-base-code`](https://huggingface.co/jinaai/jina-embeddings-v2-base-code): 161 million parameters code embeddings.

## Data & Parameters

The data and training details are described in this [technical report](https://arxiv.org/abs/2402.17016).

## Usage

**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>

### Why mean pooling?

`mean pooling` takes all token embeddings from the model output and averages them at sentence/paragraph level.
It has been proven to be the most effective way to produce high-quality sentence embeddings.
We offer an `encode` function to deal with this.

However, if you would like to do it without using the default `encode` function:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['How is the weather today?', 'What is the current weather like today?']

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-base-de')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-de', trust_remote_code=True, torch_dtype=torch.bfloat16)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
```

</p>
</details>

You can use Jina Embedding models directly from the transformers package.

```python
!pip install transformers
import torch
from transformers import AutoModel
from numpy.linalg import norm

cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b))

model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-de', trust_remote_code=True, torch_dtype=torch.bfloat16)
embeddings = model.encode(['How is the weather today?', 'Wie ist das Wetter heute?'])
print(cos_sim(embeddings[0], embeddings[1]))
```

If you only want to handle shorter sequences, such as 2k tokens, pass the `max_length` parameter to the `encode` function:

```python
embeddings = model.encode(
    ['Very long ... document'],
    max_length=2048
)
```

As of its latest release (v2.3.0), sentence-transformers also supports Jina embeddings (please make sure that you are logged in to Hugging Face as well):

```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    "jinaai/jina-embeddings-v2-base-de", # switch to en/zh for English or Chinese
    trust_remote_code=True
)

# control your input sequence length up to 8192
model.max_seq_length = 1024

embeddings = model.encode([
    'How is the weather today?',
    'Wie ist das Wetter heute?'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
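To make the bilingual behaviour described above concrete, the following small check builds only on the `encode` call and `cos_sim` helper from the snippets above; the three example sentences are made up for illustration. A translation pair should typically score noticeably higher than an unrelated pair.

```python
# Illustrative cross-lingual sanity check using the sentence-transformers setup
# shown in the previous snippet. Example sentences are hypothetical.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("jinaai/jina-embeddings-v2-base-de", trust_remote_code=True)

english_query = "Organic skincare products for sensitive skin"
german_translation = "Bio-Hautpflegeprodukte für empfindliche Haut"
unrelated_german = "Die Aktienmärkte schlossen heute deutlich im Minus."

emb = model.encode([english_query, german_translation, unrelated_german])

# The translation pair is expected to score higher than the unrelated pair.
print("EN vs. DE translation:", float(cos_sim(emb[0], emb[1])))
print("EN vs. unrelated DE:  ", float(cos_sim(emb[0], emb[2])))
```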
## Alternatives to Using Transformers Package

1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/).
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).

## Benchmark Results

We evaluated our bilingual model on all German and English evaluation tasks available on the [MTEB benchmark](https://huggingface.co/blog/mteb).
In addition, we evaluated the models against a couple of other German, English, and multilingual models on additional German evaluation tasks:

<img src="de_evaluation_results.png" width="780px">

## Use Jina Embeddings for RAG

According to the latest blog post from [LlamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),

> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.

<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">
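As one way to picture the retrieval step in such a RAG pipeline, here is a minimal, hypothetical sketch that ranks German passages for an English question with the `encode` function shown earlier and assembles a context prompt for a downstream generator. The passages, the prompt template, and the `top_k` value are placeholders, and the reranking stage mentioned in the quote is omitted.

```python
# Minimal retrieve-then-prompt sketch built on the encode() API shown earlier.
# Passages, prompt template, and top_k are illustrative placeholders; no
# reranker is applied here.
import numpy as np
from transformers import AutoModel

model = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-de", trust_remote_code=True)

passages = [  # hypothetical German knowledge-base snippets
    "Die Rückgabefrist beträgt 30 Tage ab Lieferdatum.",
    "Der Versand innerhalb Deutschlands dauert in der Regel 2 bis 3 Werktage.",
    "Unser Support ist werktags von 9 bis 17 Uhr erreichbar.",
]
question = "How long do I have to return an item?"

# Embed the corpus and the query, then rank passages by cosine similarity.
corpus_emb = np.asarray(model.encode(passages))
query_emb = np.asarray(model.encode([question]))[0]
scores = corpus_emb @ query_emb / (
    np.linalg.norm(corpus_emb, axis=1) * np.linalg.norm(query_emb)
)

top_k = 2
best = np.argsort(-scores)[:top_k]

# Assemble the context block a downstream LLM would receive.
context = "\n".join(passages[i] for i in best)
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)
print(prompt)
```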
<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px"> ## Contact Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas. ## Citation If you find Jina Embeddings useful in your research, please cite the following paper: ``` @article{mohr2024multi, title={Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings}, author={Mohr, Isabelle and Krimmel, Markus and Sturua, Saba and Akram, Mohammad Kalim and Koukounas, Andreas and G{\"u}nther, Michael and Mastrapas, Georgios and Ravishankar, Vinit and Mart{\'\i}nez, Joan Fontanals and Wang, Feng and others}, journal={arXiv preprint arXiv:2402.17016}, year={2024} } ```
{"language": ["de", "en"], "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "transformers", "transformers.js"], "inference": false, "model-index": [{"name": "jina-embeddings-v2-base-de", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 73.76119402985076}, {"type": "ap", "value": 35.99577188521176}, {"type": "f1", "value": 67.50397431543269}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (de)", "type": "mteb/amazon_counterfactual", "config": "de", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 68.9186295503212}, {"type": "ap", "value": 79.73307115840507}, {"type": "f1", "value": 66.66245744831339}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 77.52215}, {"type": "ap", "value": 71.85051037177416}, {"type": "f1", "value": 77.4171096157774}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 38.498}, {"type": "f1", "value": 38.058193386555956}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (de)", "type": "mteb/amazon_reviews_multi", "config": "de", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 37.717999999999996}, {"type": "f1", "value": 37.22674371574757}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 25.319999999999997}, {"type": "map_at_10", "value": 40.351}, {"type": "map_at_100", "value": 41.435}, {"type": "map_at_1000", "value": 41.443000000000005}, {"type": "map_at_3", "value": 35.266}, {"type": "map_at_5", "value": 37.99}, {"type": "mrr_at_1", "value": 25.746999999999996}, {"type": "mrr_at_10", "value": 40.515}, {"type": "mrr_at_100", "value": 41.606}, {"type": "mrr_at_1000", "value": 41.614000000000004}, {"type": "mrr_at_3", "value": 35.42}, {"type": "mrr_at_5", "value": 38.112}, {"type": "ndcg_at_1", "value": 25.319999999999997}, {"type": "ndcg_at_10", "value": 49.332}, {"type": "ndcg_at_100", "value": 53.909}, {"type": "ndcg_at_1000", "value": 54.089}, {"type": "ndcg_at_3", "value": 38.705}, {"type": "ndcg_at_5", "value": 43.606}, {"type": "precision_at_1", "value": 25.319999999999997}, {"type": "precision_at_10", "value": 7.831}, {"type": "precision_at_100", "value": 0.9820000000000001}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 16.24}, {"type": "precision_at_5", "value": 12.119}, {"type": "recall_at_1", "value": 25.319999999999997}, {"type": "recall_at_10", "value": 78.307}, {"type": "recall_at_100", "value": 98.222}, {"type": "recall_at_1000", "value": 99.57300000000001}, {"type": "recall_at_3", "value": 48.72}, {"type": "recall_at_5", 
"value": 60.597}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 41.43100588255654}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 32.08988904593667}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 60.55514765595906}, {"type": "mrr", "value": 73.51393835465858}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.6723823121172}, {"type": "cos_sim_spearman", "value": 76.90596922214986}, {"type": "euclidean_pearson", "value": 77.87910737957918}, {"type": "euclidean_spearman", "value": 76.66319260598262}, {"type": "manhattan_pearson", "value": 77.37039493457965}, {"type": "manhattan_spearman", "value": 76.09872191280964}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (de-en)", "type": "mteb/bucc-bitext-mining", "config": "de-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 98.97703549060543}, {"type": "f1", "value": 98.86569241475296}, {"type": "precision", "value": 98.81002087682673}, {"type": "recall", "value": 98.97703549060543}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 83.93506493506493}, {"type": "f1", "value": 83.91014949949302}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 34.970675877585144}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 28.779230269190954}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BlurbsClusteringP2P", "type": "slvnwhrl/blurbs-clustering-p2p", "config": "default", "split": "test", "revision": "a2dd5b02a77de3466a3eaa98ae586b5610314496"}, "metrics": [{"type": "v_measure", "value": 35.490175601567216}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BlurbsClusteringS2S", "type": "slvnwhrl/blurbs-clustering-s2s", "config": "default", "split": "test", "revision": "9bfff9a7f8f6dc6ffc9da71c48dd48b68696471d"}, "metrics": [{"type": "v_measure", "value": 16.16638280560168}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 30.830999999999996}, 
{"type": "map_at_10", "value": 41.355}, {"type": "map_at_100", "value": 42.791000000000004}, {"type": "map_at_1000", "value": 42.918}, {"type": "map_at_3", "value": 38.237}, {"type": "map_at_5", "value": 40.066}, {"type": "mrr_at_1", "value": 38.484}, {"type": "mrr_at_10", "value": 47.593}, {"type": "mrr_at_100", "value": 48.388}, {"type": "mrr_at_1000", "value": 48.439}, {"type": "mrr_at_3", "value": 45.279}, {"type": "mrr_at_5", "value": 46.724}, {"type": "ndcg_at_1", "value": 38.484}, {"type": "ndcg_at_10", "value": 47.27}, {"type": "ndcg_at_100", "value": 52.568000000000005}, {"type": "ndcg_at_1000", "value": 54.729000000000006}, {"type": "ndcg_at_3", "value": 43.061}, {"type": "ndcg_at_5", "value": 45.083}, {"type": "precision_at_1", "value": 38.484}, {"type": "precision_at_10", "value": 8.927}, {"type": "precision_at_100", "value": 1.425}, {"type": "precision_at_1000", "value": 0.19}, {"type": "precision_at_3", "value": 20.791999999999998}, {"type": "precision_at_5", "value": 14.85}, {"type": "recall_at_1", "value": 30.830999999999996}, {"type": "recall_at_10", "value": 57.87799999999999}, {"type": "recall_at_100", "value": 80.124}, {"type": "recall_at_1000", "value": 94.208}, {"type": "recall_at_3", "value": 45.083}, {"type": "recall_at_5", "value": 51.154999999999994}, {"type": "map_at_1", "value": 25.782}, {"type": "map_at_10", "value": 34.492}, {"type": "map_at_100", "value": 35.521}, {"type": "map_at_1000", "value": 35.638}, {"type": "map_at_3", "value": 31.735999999999997}, {"type": "map_at_5", "value": 33.339}, {"type": "mrr_at_1", "value": 32.357}, {"type": "mrr_at_10", "value": 39.965}, {"type": "mrr_at_100", "value": 40.644000000000005}, {"type": "mrr_at_1000", "value": 40.695}, {"type": "mrr_at_3", "value": 37.739}, {"type": "mrr_at_5", "value": 39.061}, {"type": "ndcg_at_1", "value": 32.357}, {"type": "ndcg_at_10", "value": 39.644}, {"type": "ndcg_at_100", "value": 43.851}, {"type": "ndcg_at_1000", "value": 46.211999999999996}, {"type": "ndcg_at_3", "value": 35.675000000000004}, {"type": "ndcg_at_5", "value": 37.564}, {"type": "precision_at_1", "value": 32.357}, {"type": "precision_at_10", "value": 7.344}, {"type": "precision_at_100", "value": 1.201}, {"type": "precision_at_1000", "value": 0.168}, {"type": "precision_at_3", "value": 17.155}, {"type": "precision_at_5", "value": 12.166}, {"type": "recall_at_1", "value": 25.782}, {"type": "recall_at_10", "value": 49.132999999999996}, {"type": "recall_at_100", "value": 67.24}, {"type": "recall_at_1000", "value": 83.045}, {"type": "recall_at_3", "value": 37.021}, {"type": "recall_at_5", "value": 42.548}, {"type": "map_at_1", "value": 35.778999999999996}, {"type": "map_at_10", "value": 47.038000000000004}, {"type": "map_at_100", "value": 48.064}, {"type": "map_at_1000", "value": 48.128}, {"type": "map_at_3", "value": 44.186}, {"type": "map_at_5", "value": 45.788000000000004}, {"type": "mrr_at_1", "value": 41.254000000000005}, {"type": "mrr_at_10", "value": 50.556999999999995}, {"type": "mrr_at_100", "value": 51.296}, {"type": "mrr_at_1000", "value": 51.331}, {"type": "mrr_at_3", "value": 48.318}, {"type": "mrr_at_5", "value": 49.619}, {"type": "ndcg_at_1", "value": 41.254000000000005}, {"type": "ndcg_at_10", "value": 52.454}, {"type": "ndcg_at_100", "value": 56.776}, {"type": "ndcg_at_1000", "value": 58.181000000000004}, {"type": "ndcg_at_3", "value": 47.713}, {"type": "ndcg_at_5", "value": 49.997}, {"type": "precision_at_1", "value": 41.254000000000005}, {"type": "precision_at_10", "value": 8.464}, {"type": "precision_at_100", 
"value": 1.157}, {"type": "precision_at_1000", "value": 0.133}, {"type": "precision_at_3", "value": 21.526}, {"type": "precision_at_5", "value": 14.696000000000002}, {"type": "recall_at_1", "value": 35.778999999999996}, {"type": "recall_at_10", "value": 64.85300000000001}, {"type": "recall_at_100", "value": 83.98400000000001}, {"type": "recall_at_1000", "value": 94.18299999999999}, {"type": "recall_at_3", "value": 51.929}, {"type": "recall_at_5", "value": 57.666}, {"type": "map_at_1", "value": 21.719}, {"type": "map_at_10", "value": 29.326999999999998}, {"type": "map_at_100", "value": 30.314000000000004}, {"type": "map_at_1000", "value": 30.397000000000002}, {"type": "map_at_3", "value": 27.101}, {"type": "map_at_5", "value": 28.141}, {"type": "mrr_at_1", "value": 23.503}, {"type": "mrr_at_10", "value": 31.225}, {"type": "mrr_at_100", "value": 32.096000000000004}, {"type": "mrr_at_1000", "value": 32.159}, {"type": "mrr_at_3", "value": 29.076999999999998}, {"type": "mrr_at_5", "value": 30.083}, {"type": "ndcg_at_1", "value": 23.503}, {"type": "ndcg_at_10", "value": 33.842}, {"type": "ndcg_at_100", "value": 39.038000000000004}, {"type": "ndcg_at_1000", "value": 41.214}, {"type": "ndcg_at_3", "value": 29.347}, {"type": "ndcg_at_5", "value": 31.121}, {"type": "precision_at_1", "value": 23.503}, {"type": "precision_at_10", "value": 5.266}, {"type": "precision_at_100", "value": 0.831}, {"type": "precision_at_1000", "value": 0.106}, {"type": "precision_at_3", "value": 12.504999999999999}, {"type": "precision_at_5", "value": 8.565000000000001}, {"type": "recall_at_1", "value": 21.719}, {"type": "recall_at_10", "value": 46.024}, {"type": "recall_at_100", "value": 70.78999999999999}, {"type": "recall_at_1000", "value": 87.022}, {"type": "recall_at_3", "value": 33.64}, {"type": "recall_at_5", "value": 37.992}, {"type": "map_at_1", "value": 15.601}, {"type": "map_at_10", "value": 22.054000000000002}, {"type": "map_at_100", "value": 23.177}, {"type": "map_at_1000", "value": 23.308}, {"type": "map_at_3", "value": 19.772000000000002}, {"type": "map_at_5", "value": 21.055}, {"type": "mrr_at_1", "value": 19.403000000000002}, {"type": "mrr_at_10", "value": 26.409}, {"type": "mrr_at_100", "value": 27.356}, {"type": "mrr_at_1000", "value": 27.441}, {"type": "mrr_at_3", "value": 24.108999999999998}, {"type": "mrr_at_5", "value": 25.427}, {"type": "ndcg_at_1", "value": 19.403000000000002}, {"type": "ndcg_at_10", "value": 26.474999999999998}, {"type": "ndcg_at_100", "value": 32.086}, {"type": "ndcg_at_1000", "value": 35.231}, {"type": "ndcg_at_3", "value": 22.289}, {"type": "ndcg_at_5", "value": 24.271}, {"type": "precision_at_1", "value": 19.403000000000002}, {"type": "precision_at_10", "value": 4.813}, {"type": "precision_at_100", "value": 0.8869999999999999}, {"type": "precision_at_1000", "value": 0.13}, {"type": "precision_at_3", "value": 10.531}, {"type": "precision_at_5", "value": 7.710999999999999}, {"type": "recall_at_1", "value": 15.601}, {"type": "recall_at_10", "value": 35.916}, {"type": "recall_at_100", "value": 60.8}, {"type": "recall_at_1000", "value": 83.245}, {"type": "recall_at_3", "value": 24.321}, {"type": "recall_at_5", "value": 29.372999999999998}, {"type": "map_at_1", "value": 25.522}, {"type": "map_at_10", "value": 34.854}, {"type": "map_at_100", "value": 36.269}, {"type": "map_at_1000", "value": 36.387}, {"type": "map_at_3", "value": 32.187}, {"type": "map_at_5", "value": 33.692}, {"type": "mrr_at_1", "value": 31.375999999999998}, {"type": "mrr_at_10", "value": 40.471000000000004}, 
{"type": "mrr_at_100", "value": 41.481}, {"type": "mrr_at_1000", "value": 41.533}, {"type": "mrr_at_3", "value": 38.274}, {"type": "mrr_at_5", "value": 39.612}, {"type": "ndcg_at_1", "value": 31.375999999999998}, {"type": "ndcg_at_10", "value": 40.298}, {"type": "ndcg_at_100", "value": 46.255}, {"type": "ndcg_at_1000", "value": 48.522}, {"type": "ndcg_at_3", "value": 36.049}, {"type": "ndcg_at_5", "value": 38.095}, {"type": "precision_at_1", "value": 31.375999999999998}, {"type": "precision_at_10", "value": 7.305000000000001}, {"type": "precision_at_100", "value": 1.201}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 17.132}, {"type": "precision_at_5", "value": 12.107999999999999}, {"type": "recall_at_1", "value": 25.522}, {"type": "recall_at_10", "value": 50.988}, {"type": "recall_at_100", "value": 76.005}, {"type": "recall_at_1000", "value": 91.11200000000001}, {"type": "recall_at_3", "value": 38.808}, {"type": "recall_at_5", "value": 44.279}, {"type": "map_at_1", "value": 24.615000000000002}, {"type": "map_at_10", "value": 32.843}, {"type": "map_at_100", "value": 34.172999999999995}, {"type": "map_at_1000", "value": 34.286}, {"type": "map_at_3", "value": 30.125}, {"type": "map_at_5", "value": 31.495}, {"type": "mrr_at_1", "value": 30.023}, {"type": "mrr_at_10", "value": 38.106}, {"type": "mrr_at_100", "value": 39.01}, {"type": "mrr_at_1000", "value": 39.071}, {"type": "mrr_at_3", "value": 35.674}, {"type": "mrr_at_5", "value": 36.924}, {"type": "ndcg_at_1", "value": 30.023}, {"type": "ndcg_at_10", "value": 38.091}, {"type": "ndcg_at_100", "value": 43.771}, {"type": "ndcg_at_1000", "value": 46.315}, {"type": "ndcg_at_3", "value": 33.507}, {"type": "ndcg_at_5", "value": 35.304}, {"type": "precision_at_1", "value": 30.023}, {"type": "precision_at_10", "value": 6.837999999999999}, {"type": "precision_at_100", "value": 1.124}, {"type": "precision_at_1000", "value": 0.152}, {"type": "precision_at_3", "value": 15.562999999999999}, {"type": "precision_at_5", "value": 10.936}, {"type": "recall_at_1", "value": 24.615000000000002}, {"type": "recall_at_10", "value": 48.691}, {"type": "recall_at_100", "value": 72.884}, {"type": "recall_at_1000", "value": 90.387}, {"type": "recall_at_3", "value": 35.659}, {"type": "recall_at_5", "value": 40.602}, {"type": "map_at_1", "value": 23.223666666666666}, {"type": "map_at_10", "value": 31.338166666666673}, {"type": "map_at_100", "value": 32.47358333333333}, {"type": "map_at_1000", "value": 32.5955}, {"type": "map_at_3", "value": 28.84133333333333}, {"type": "map_at_5", "value": 30.20808333333333}, {"type": "mrr_at_1", "value": 27.62483333333333}, {"type": "mrr_at_10", "value": 35.385916666666674}, {"type": "mrr_at_100", "value": 36.23325}, {"type": "mrr_at_1000", "value": 36.29966666666667}, {"type": "mrr_at_3", "value": 33.16583333333333}, {"type": "mrr_at_5", "value": 34.41983333333334}, {"type": "ndcg_at_1", "value": 27.62483333333333}, {"type": "ndcg_at_10", "value": 36.222}, {"type": "ndcg_at_100", "value": 41.29491666666666}, {"type": "ndcg_at_1000", "value": 43.85508333333333}, {"type": "ndcg_at_3", "value": 31.95116666666667}, {"type": "ndcg_at_5", "value": 33.88541666666667}, {"type": "precision_at_1", "value": 27.62483333333333}, {"type": "precision_at_10", "value": 6.339916666666667}, {"type": "precision_at_100", "value": 1.0483333333333333}, {"type": "precision_at_1000", "value": 0.14608333333333334}, {"type": "precision_at_3", "value": 14.726500000000003}, {"type": "precision_at_5", "value": 10.395}, {"type": 
"recall_at_1", "value": 23.223666666666666}, {"type": "recall_at_10", "value": 46.778999999999996}, {"type": "recall_at_100", "value": 69.27141666666667}, {"type": "recall_at_1000", "value": 87.27383333333334}, {"type": "recall_at_3", "value": 34.678749999999994}, {"type": "recall_at_5", "value": 39.79900000000001}, {"type": "map_at_1", "value": 21.677}, {"type": "map_at_10", "value": 27.828000000000003}, {"type": "map_at_100", "value": 28.538999999999998}, {"type": "map_at_1000", "value": 28.64}, {"type": "map_at_3", "value": 26.105}, {"type": "map_at_5", "value": 27.009}, {"type": "mrr_at_1", "value": 24.387}, {"type": "mrr_at_10", "value": 30.209999999999997}, {"type": "mrr_at_100", "value": 30.953000000000003}, {"type": "mrr_at_1000", "value": 31.029}, {"type": "mrr_at_3", "value": 28.707}, {"type": "mrr_at_5", "value": 29.610999999999997}, {"type": "ndcg_at_1", "value": 24.387}, {"type": "ndcg_at_10", "value": 31.378}, {"type": "ndcg_at_100", "value": 35.249}, {"type": "ndcg_at_1000", "value": 37.923}, {"type": "ndcg_at_3", "value": 28.213}, {"type": "ndcg_at_5", "value": 29.658}, {"type": "precision_at_1", "value": 24.387}, {"type": "precision_at_10", "value": 4.8309999999999995}, {"type": "precision_at_100", "value": 0.73}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 12.168}, {"type": "precision_at_5", "value": 8.251999999999999}, {"type": "recall_at_1", "value": 21.677}, {"type": "recall_at_10", "value": 40.069}, {"type": "recall_at_100", "value": 58.077}, {"type": "recall_at_1000", "value": 77.97}, {"type": "recall_at_3", "value": 31.03}, {"type": "recall_at_5", "value": 34.838}, {"type": "map_at_1", "value": 14.484}, {"type": "map_at_10", "value": 20.355}, {"type": "map_at_100", "value": 21.382}, {"type": "map_at_1000", "value": 21.511}, {"type": "map_at_3", "value": 18.448}, {"type": "map_at_5", "value": 19.451999999999998}, {"type": "mrr_at_1", "value": 17.584}, {"type": "mrr_at_10", "value": 23.825}, {"type": "mrr_at_100", "value": 24.704}, {"type": "mrr_at_1000", "value": 24.793000000000003}, {"type": "mrr_at_3", "value": 21.92}, {"type": "mrr_at_5", "value": 22.97}, {"type": "ndcg_at_1", "value": 17.584}, {"type": "ndcg_at_10", "value": 24.315}, {"type": "ndcg_at_100", "value": 29.354999999999997}, {"type": "ndcg_at_1000", "value": 32.641999999999996}, {"type": "ndcg_at_3", "value": 20.802}, {"type": "ndcg_at_5", "value": 22.335}, {"type": "precision_at_1", "value": 17.584}, {"type": "precision_at_10", "value": 4.443}, {"type": "precision_at_100", "value": 0.8160000000000001}, {"type": "precision_at_1000", "value": 0.128}, {"type": "precision_at_3", "value": 9.807}, {"type": "precision_at_5", "value": 7.0889999999999995}, {"type": "recall_at_1", "value": 14.484}, {"type": "recall_at_10", "value": 32.804}, {"type": "recall_at_100", "value": 55.679}, {"type": "recall_at_1000", "value": 79.63}, {"type": "recall_at_3", "value": 22.976}, {"type": "recall_at_5", "value": 26.939}, {"type": "map_at_1", "value": 22.983999999999998}, {"type": "map_at_10", "value": 30.812}, {"type": "map_at_100", "value": 31.938}, {"type": "map_at_1000", "value": 32.056000000000004}, {"type": "map_at_3", "value": 28.449999999999996}, {"type": "map_at_5", "value": 29.542}, {"type": "mrr_at_1", "value": 27.145999999999997}, {"type": "mrr_at_10", "value": 34.782999999999994}, {"type": "mrr_at_100", "value": 35.699}, {"type": "mrr_at_1000", "value": 35.768}, {"type": "mrr_at_3", "value": 32.572}, {"type": "mrr_at_5", "value": 33.607}, {"type": "ndcg_at_1", "value": 
27.145999999999997}, {"type": "ndcg_at_10", "value": 35.722}, {"type": "ndcg_at_100", "value": 40.964}, {"type": "ndcg_at_1000", "value": 43.598}, {"type": "ndcg_at_3", "value": 31.379}, {"type": "ndcg_at_5", "value": 32.924}, {"type": "precision_at_1", "value": 27.145999999999997}, {"type": "precision_at_10", "value": 6.063000000000001}, {"type": "precision_at_100", "value": 0.9730000000000001}, {"type": "precision_at_1000", "value": 0.13}, {"type": "precision_at_3", "value": 14.366000000000001}, {"type": "precision_at_5", "value": 9.776}, {"type": "recall_at_1", "value": 22.983999999999998}, {"type": "recall_at_10", "value": 46.876}, {"type": "recall_at_100", "value": 69.646}, {"type": "recall_at_1000", "value": 88.305}, {"type": "recall_at_3", "value": 34.471000000000004}, {"type": "recall_at_5", "value": 38.76}, {"type": "map_at_1", "value": 23.017000000000003}, {"type": "map_at_10", "value": 31.049}, {"type": "map_at_100", "value": 32.582}, {"type": "map_at_1000", "value": 32.817}, {"type": "map_at_3", "value": 28.303}, {"type": "map_at_5", "value": 29.854000000000003}, {"type": "mrr_at_1", "value": 27.866000000000003}, {"type": "mrr_at_10", "value": 35.56}, {"type": "mrr_at_100", "value": 36.453}, {"type": "mrr_at_1000", "value": 36.519}, {"type": "mrr_at_3", "value": 32.938}, {"type": "mrr_at_5", "value": 34.391}, {"type": "ndcg_at_1", "value": 27.866000000000003}, {"type": "ndcg_at_10", "value": 36.506}, {"type": "ndcg_at_100", "value": 42.344}, {"type": "ndcg_at_1000", "value": 45.213}, {"type": "ndcg_at_3", "value": 31.805}, {"type": "ndcg_at_5", "value": 33.933}, {"type": "precision_at_1", "value": 27.866000000000003}, {"type": "precision_at_10", "value": 7.016}, {"type": "precision_at_100", "value": 1.468}, {"type": "precision_at_1000", "value": 0.23900000000000002}, {"type": "precision_at_3", "value": 14.822}, {"type": "precision_at_5", "value": 10.791}, {"type": "recall_at_1", "value": 23.017000000000003}, {"type": "recall_at_10", "value": 47.053}, {"type": "recall_at_100", "value": 73.177}, {"type": "recall_at_1000", "value": 91.47800000000001}, {"type": "recall_at_3", "value": 33.675}, {"type": "recall_at_5", "value": 39.36}, {"type": "map_at_1", "value": 16.673}, {"type": "map_at_10", "value": 24.051000000000002}, {"type": "map_at_100", "value": 24.933}, {"type": "map_at_1000", "value": 25.06}, {"type": "map_at_3", "value": 21.446}, {"type": "map_at_5", "value": 23.064}, {"type": "mrr_at_1", "value": 18.115000000000002}, {"type": "mrr_at_10", "value": 25.927}, {"type": "mrr_at_100", "value": 26.718999999999998}, {"type": "mrr_at_1000", "value": 26.817999999999998}, {"type": "mrr_at_3", "value": 23.383000000000003}, {"type": "mrr_at_5", "value": 25.008999999999997}, {"type": "ndcg_at_1", "value": 18.115000000000002}, {"type": "ndcg_at_10", "value": 28.669}, {"type": "ndcg_at_100", "value": 33.282000000000004}, {"type": "ndcg_at_1000", "value": 36.481}, {"type": "ndcg_at_3", "value": 23.574}, {"type": "ndcg_at_5", "value": 26.340000000000003}, {"type": "precision_at_1", "value": 18.115000000000002}, {"type": "precision_at_10", "value": 4.769}, {"type": "precision_at_100", "value": 0.767}, {"type": "precision_at_1000", "value": 0.116}, {"type": "precision_at_3", "value": 10.351}, {"type": "precision_at_5", "value": 7.8}, {"type": "recall_at_1", "value": 16.673}, {"type": "recall_at_10", "value": 41.063}, {"type": "recall_at_100", "value": 62.851}, {"type": "recall_at_1000", "value": 86.701}, {"type": "recall_at_3", "value": 27.532}, {"type": "recall_at_5", "value": 34.076}]}, 
{"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 8.752}, {"type": "map_at_10", "value": 15.120000000000001}, {"type": "map_at_100", "value": 16.678}, {"type": "map_at_1000", "value": 16.854}, {"type": "map_at_3", "value": 12.603}, {"type": "map_at_5", "value": 13.918}, {"type": "mrr_at_1", "value": 19.283}, {"type": "mrr_at_10", "value": 29.145}, {"type": "mrr_at_100", "value": 30.281000000000002}, {"type": "mrr_at_1000", "value": 30.339}, {"type": "mrr_at_3", "value": 26.069}, {"type": "mrr_at_5", "value": 27.864}, {"type": "ndcg_at_1", "value": 19.283}, {"type": "ndcg_at_10", "value": 21.804000000000002}, {"type": "ndcg_at_100", "value": 28.576}, {"type": "ndcg_at_1000", "value": 32.063}, {"type": "ndcg_at_3", "value": 17.511}, {"type": "ndcg_at_5", "value": 19.112000000000002}, {"type": "precision_at_1", "value": 19.283}, {"type": "precision_at_10", "value": 6.873}, {"type": "precision_at_100", "value": 1.405}, {"type": "precision_at_1000", "value": 0.20500000000000002}, {"type": "precision_at_3", "value": 13.16}, {"type": "precision_at_5", "value": 10.189}, {"type": "recall_at_1", "value": 8.752}, {"type": "recall_at_10", "value": 27.004}, {"type": "recall_at_100", "value": 50.648}, {"type": "recall_at_1000", "value": 70.458}, {"type": "recall_at_3", "value": 16.461000000000002}, {"type": "recall_at_5", "value": 20.973}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 6.81}, {"type": "map_at_10", "value": 14.056}, {"type": "map_at_100", "value": 18.961}, {"type": "map_at_1000", "value": 20.169}, {"type": "map_at_3", "value": 10.496}, {"type": "map_at_5", "value": 11.952}, {"type": "mrr_at_1", "value": 53.5}, {"type": "mrr_at_10", "value": 63.479}, {"type": "mrr_at_100", "value": 63.971999999999994}, {"type": "mrr_at_1000", "value": 63.993}, {"type": "mrr_at_3", "value": 61.541999999999994}, {"type": "mrr_at_5", "value": 62.778999999999996}, {"type": "ndcg_at_1", "value": 42.25}, {"type": "ndcg_at_10", "value": 31.471}, {"type": "ndcg_at_100", "value": 35.115}, {"type": "ndcg_at_1000", "value": 42.408}, {"type": "ndcg_at_3", "value": 35.458}, {"type": "ndcg_at_5", "value": 32.973}, {"type": "precision_at_1", "value": 53.5}, {"type": "precision_at_10", "value": 24.85}, {"type": "precision_at_100", "value": 7.79}, {"type": "precision_at_1000", "value": 1.599}, {"type": "precision_at_3", "value": 38.667}, {"type": "precision_at_5", "value": 31.55}, {"type": "recall_at_1", "value": 6.81}, {"type": "recall_at_10", "value": 19.344}, {"type": "recall_at_100", "value": 40.837}, {"type": "recall_at_1000", "value": 64.661}, {"type": "recall_at_3", "value": 11.942}, {"type": "recall_at_5", "value": 14.646}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 44.64499999999999}, {"type": "f1", "value": 39.39106911352714}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 48.196}, {"type": "map_at_10", "value": 61.404}, {"type": "map_at_100", "value": 61.846000000000004}, 
{"type": "map_at_1000", "value": 61.866}, {"type": "map_at_3", "value": 58.975}, {"type": "map_at_5", "value": 60.525}, {"type": "mrr_at_1", "value": 52.025}, {"type": "mrr_at_10", "value": 65.43299999999999}, {"type": "mrr_at_100", "value": 65.80799999999999}, {"type": "mrr_at_1000", "value": 65.818}, {"type": "mrr_at_3", "value": 63.146}, {"type": "mrr_at_5", "value": 64.64}, {"type": "ndcg_at_1", "value": 52.025}, {"type": "ndcg_at_10", "value": 67.889}, {"type": "ndcg_at_100", "value": 69.864}, {"type": "ndcg_at_1000", "value": 70.337}, {"type": "ndcg_at_3", "value": 63.315}, {"type": "ndcg_at_5", "value": 65.91799999999999}, {"type": "precision_at_1", "value": 52.025}, {"type": "precision_at_10", "value": 9.182}, {"type": "precision_at_100", "value": 1.027}, {"type": "precision_at_1000", "value": 0.108}, {"type": "precision_at_3", "value": 25.968000000000004}, {"type": "precision_at_5", "value": 17.006}, {"type": "recall_at_1", "value": 48.196}, {"type": "recall_at_10", "value": 83.885}, {"type": "recall_at_100", "value": 92.671}, {"type": "recall_at_1000", "value": 96.018}, {"type": "recall_at_3", "value": 71.59}, {"type": "recall_at_5", "value": 77.946}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 15.193000000000001}, {"type": "map_at_10", "value": 25.168000000000003}, {"type": "map_at_100", "value": 27.017000000000003}, {"type": "map_at_1000", "value": 27.205000000000002}, {"type": "map_at_3", "value": 21.746}, {"type": "map_at_5", "value": 23.579}, {"type": "mrr_at_1", "value": 31.635999999999996}, {"type": "mrr_at_10", "value": 40.077}, {"type": "mrr_at_100", "value": 41.112}, {"type": "mrr_at_1000", "value": 41.160999999999994}, {"type": "mrr_at_3", "value": 37.937}, {"type": "mrr_at_5", "value": 39.18}, {"type": "ndcg_at_1", "value": 31.635999999999996}, {"type": "ndcg_at_10", "value": 32.298}, {"type": "ndcg_at_100", "value": 39.546}, {"type": "ndcg_at_1000", "value": 42.88}, {"type": "ndcg_at_3", "value": 29.221999999999998}, {"type": "ndcg_at_5", "value": 30.069000000000003}, {"type": "precision_at_1", "value": 31.635999999999996}, {"type": "precision_at_10", "value": 9.367}, {"type": "precision_at_100", "value": 1.645}, {"type": "precision_at_1000", "value": 0.22399999999999998}, {"type": "precision_at_3", "value": 20.01}, {"type": "precision_at_5", "value": 14.753}, {"type": "recall_at_1", "value": 15.193000000000001}, {"type": "recall_at_10", "value": 38.214999999999996}, {"type": "recall_at_100", "value": 65.95}, {"type": "recall_at_1000", "value": 85.85300000000001}, {"type": "recall_at_3", "value": 26.357000000000003}, {"type": "recall_at_5", "value": 31.319999999999997}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB GerDaLIR", "type": "jinaai/ger_da_lir", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 10.363}, {"type": "map_at_10", "value": 16.222}, {"type": "map_at_100", "value": 17.28}, {"type": "map_at_1000", "value": 17.380000000000003}, {"type": "map_at_3", "value": 14.054}, {"type": "map_at_5", "value": 15.203}, {"type": "mrr_at_1", "value": 11.644}, {"type": "mrr_at_10", "value": 17.625}, {"type": "mrr_at_100", "value": 18.608}, {"type": "mrr_at_1000", "value": 18.695999999999998}, {"type": "mrr_at_3", "value": 15.481}, {"type": "mrr_at_5", "value": 16.659}, {"type": "ndcg_at_1", "value": 11.628}, {"type": "ndcg_at_10", "value": 20.028000000000002}, 
{"type": "ndcg_at_100", "value": 25.505}, {"type": "ndcg_at_1000", "value": 28.288000000000004}, {"type": "ndcg_at_3", "value": 15.603}, {"type": "ndcg_at_5", "value": 17.642}, {"type": "precision_at_1", "value": 11.628}, {"type": "precision_at_10", "value": 3.5589999999999997}, {"type": "precision_at_100", "value": 0.664}, {"type": "precision_at_1000", "value": 0.092}, {"type": "precision_at_3", "value": 7.109999999999999}, {"type": "precision_at_5", "value": 5.401}, {"type": "recall_at_1", "value": 10.363}, {"type": "recall_at_10", "value": 30.586000000000002}, {"type": "recall_at_100", "value": 56.43}, {"type": "recall_at_1000", "value": 78.142}, {"type": "recall_at_3", "value": 18.651}, {"type": "recall_at_5", "value": 23.493}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB GermanDPR", "type": "deepset/germandpr", "config": "default", "split": "test", "revision": "5129d02422a66be600ac89cd3e8531b4f97d347d"}, "metrics": [{"type": "map_at_1", "value": 60.78}, {"type": "map_at_10", "value": 73.91499999999999}, {"type": "map_at_100", "value": 74.089}, {"type": "map_at_1000", "value": 74.09400000000001}, {"type": "map_at_3", "value": 71.87}, {"type": "map_at_5", "value": 73.37700000000001}, {"type": "mrr_at_1", "value": 60.78}, {"type": "mrr_at_10", "value": 73.91499999999999}, {"type": "mrr_at_100", "value": 74.089}, {"type": "mrr_at_1000", "value": 74.09400000000001}, {"type": "mrr_at_3", "value": 71.87}, {"type": "mrr_at_5", "value": 73.37700000000001}, {"type": "ndcg_at_1", "value": 60.78}, {"type": "ndcg_at_10", "value": 79.35600000000001}, {"type": "ndcg_at_100", "value": 80.077}, {"type": "ndcg_at_1000", "value": 80.203}, {"type": "ndcg_at_3", "value": 75.393}, {"type": "ndcg_at_5", "value": 78.077}, {"type": "precision_at_1", "value": 60.78}, {"type": "precision_at_10", "value": 9.59}, {"type": "precision_at_100", "value": 0.9900000000000001}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 28.52}, {"type": "precision_at_5", "value": 18.4}, {"type": "recall_at_1", "value": 60.78}, {"type": "recall_at_10", "value": 95.902}, {"type": "recall_at_100", "value": 99.024}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 85.56099999999999}, {"type": "recall_at_5", "value": 92.0}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB GermanSTSBenchmark", "type": "jinaai/german-STSbenchmark", "config": "default", "split": "test", "revision": "49d9b423b996fea62b483f9ee6dfb5ec233515ca"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.49524420894356}, {"type": "cos_sim_spearman", "value": 88.32407839427714}, {"type": "euclidean_pearson", "value": 87.25098779877104}, {"type": "euclidean_spearman", "value": 88.22738098593608}, {"type": "manhattan_pearson", "value": 87.23872691839607}, {"type": "manhattan_spearman", "value": 88.2002968380165}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 31.81}, {"type": "map_at_10", "value": 46.238}, {"type": "map_at_100", "value": 47.141}, {"type": "map_at_1000", "value": 47.213}, {"type": "map_at_3", "value": 43.248999999999995}, {"type": "map_at_5", "value": 45.078}, {"type": "mrr_at_1", "value": 63.619}, {"type": "mrr_at_10", "value": 71.279}, {"type": "mrr_at_100", "value": 71.648}, {"type": "mrr_at_1000", "value": 71.665}, {"type": "mrr_at_3", "value": 69.76599999999999}, {"type": "mrr_at_5", "value": 70.743}, {"type": "ndcg_at_1", 
"value": 63.619}, {"type": "ndcg_at_10", "value": 55.38999999999999}, {"type": "ndcg_at_100", "value": 58.80800000000001}, {"type": "ndcg_at_1000", "value": 60.331999999999994}, {"type": "ndcg_at_3", "value": 50.727}, {"type": "ndcg_at_5", "value": 53.284}, {"type": "precision_at_1", "value": 63.619}, {"type": "precision_at_10", "value": 11.668000000000001}, {"type": "precision_at_100", "value": 1.434}, {"type": "precision_at_1000", "value": 0.164}, {"type": "precision_at_3", "value": 32.001000000000005}, {"type": "precision_at_5", "value": 21.223}, {"type": "recall_at_1", "value": 31.81}, {"type": "recall_at_10", "value": 58.339}, {"type": "recall_at_100", "value": 71.708}, {"type": "recall_at_1000", "value": 81.85}, {"type": "recall_at_3", "value": 48.001}, {"type": "recall_at_5", "value": 53.059}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 68.60640000000001}, {"type": "ap", "value": 62.84296904042086}, {"type": "f1", "value": 68.50643633327537}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MIRACL", "type": "jinaai/miracl", "config": "default", "split": "test", "revision": "8741c3b61cd36ed9ca1b3d4203543a41793239e2"}, "metrics": [{"type": "map", "value": 64.29704335389768}, {"type": "mrr", "value": 72.11962197159565}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 89.3844049247606}, {"type": "f1", "value": 89.2124328528015}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (de)", "type": "mteb/mtop_domain", "config": "de", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 88.36855452240067}, {"type": "f1", "value": 87.35458822097442}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 66.48654810761514}, {"type": "f1", "value": 50.07229882504409}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (de)", "type": "mteb/mtop_intent", "config": "de", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 63.832065370526905}, {"type": "f1", "value": 46.283579383385806}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (de)", "type": "mteb/amazon_massive_intent", "config": "de", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 63.89038332212509}, {"type": "f1", "value": 61.86279849685129}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.11230665770006}, {"type": "f1", "value": 67.44780095350535}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (de)", "type": "mteb/amazon_massive_scenario", 
"config": "de", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 71.25084061869536}, {"type": "f1", "value": 71.43965023016408}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 73.73907195696032}, {"type": "f1", "value": 73.69920814839061}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 31.32577306498249}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 28.759349326367783}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 30.401342674703425}, {"type": "mrr", "value": 31.384379585660987}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.855}, {"type": "map_at_10", "value": 10.01}, {"type": "map_at_100", "value": 12.461}, {"type": "map_at_1000", "value": 13.776}, {"type": "map_at_3", "value": 7.252}, {"type": "map_at_5", "value": 8.679}, {"type": "mrr_at_1", "value": 41.176}, {"type": "mrr_at_10", "value": 49.323}, {"type": "mrr_at_100", "value": 49.954}, {"type": "mrr_at_1000", "value": 49.997}, {"type": "mrr_at_3", "value": 46.904}, {"type": "mrr_at_5", "value": 48.375}, {"type": "ndcg_at_1", "value": 39.318999999999996}, {"type": "ndcg_at_10", "value": 28.607}, {"type": "ndcg_at_100", "value": 26.554}, {"type": "ndcg_at_1000", "value": 35.731}, {"type": "ndcg_at_3", "value": 32.897999999999996}, {"type": "ndcg_at_5", "value": 31.53}, {"type": "precision_at_1", "value": 41.176}, {"type": "precision_at_10", "value": 20.867}, {"type": "precision_at_100", "value": 6.796}, {"type": "precision_at_1000", "value": 1.983}, {"type": "precision_at_3", "value": 30.547}, {"type": "precision_at_5", "value": 27.245}, {"type": "recall_at_1", "value": 4.855}, {"type": "recall_at_10", "value": 14.08}, {"type": "recall_at_100", "value": 28.188000000000002}, {"type": "recall_at_1000", "value": 60.07900000000001}, {"type": "recall_at_3", "value": 7.947}, {"type": "recall_at_5", "value": 10.786}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 26.906999999999996}, {"type": "map_at_10", "value": 41.147}, {"type": "map_at_100", "value": 42.269}, {"type": "map_at_1000", "value": 42.308}, {"type": "map_at_3", "value": 36.638999999999996}, {"type": "map_at_5", "value": 39.285}, {"type": "mrr_at_1", "value": 30.359}, {"type": "mrr_at_10", "value": 43.607}, {"type": "mrr_at_100", "value": 44.454}, {"type": "mrr_at_1000", "value": 44.481}, {"type": "mrr_at_3", "value": 39.644}, {"type": "mrr_at_5", "value": 42.061}, {"type": "ndcg_at_1", "value": 
30.330000000000002}, {"type": "ndcg_at_10", "value": 48.899}, {"type": "ndcg_at_100", "value": 53.612}, {"type": "ndcg_at_1000", "value": 54.51200000000001}, {"type": "ndcg_at_3", "value": 40.262}, {"type": "ndcg_at_5", "value": 44.787}, {"type": "precision_at_1", "value": 30.330000000000002}, {"type": "precision_at_10", "value": 8.323}, {"type": "precision_at_100", "value": 1.0959999999999999}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 18.395}, {"type": "precision_at_5", "value": 13.627}, {"type": "recall_at_1", "value": 26.906999999999996}, {"type": "recall_at_10", "value": 70.215}, {"type": "recall_at_100", "value": 90.61200000000001}, {"type": "recall_at_1000", "value": 97.294}, {"type": "recall_at_3", "value": 47.784}, {"type": "recall_at_5", "value": 58.251}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB PawsX", "type": "paws-x", "config": "default", "split": "test", "revision": "8a04d940a42cd40658986fdd8e3da561533a3646"}, "metrics": [{"type": "cos_sim_accuracy", "value": 60.5}, {"type": "cos_sim_ap", "value": 57.606096528877494}, {"type": "cos_sim_f1", "value": 62.24240307369892}, {"type": "cos_sim_precision", "value": 45.27439024390244}, {"type": "cos_sim_recall", "value": 99.55307262569832}, {"type": "dot_accuracy", "value": 57.699999999999996}, {"type": "dot_ap", "value": 51.289351057160616}, {"type": "dot_f1", "value": 62.25953130465197}, {"type": "dot_precision", "value": 45.31568228105906}, {"type": "dot_recall", "value": 99.4413407821229}, {"type": "euclidean_accuracy", "value": 60.45}, {"type": "euclidean_ap", "value": 57.616461421424034}, {"type": "euclidean_f1", "value": 62.313697657913416}, {"type": "euclidean_precision", "value": 45.657826313052524}, {"type": "euclidean_recall", "value": 98.10055865921787}, {"type": "manhattan_accuracy", "value": 60.3}, {"type": "manhattan_ap", "value": 57.580565271667325}, {"type": "manhattan_f1", "value": 62.24240307369892}, {"type": "manhattan_precision", "value": 45.27439024390244}, {"type": "manhattan_recall", "value": 99.55307262569832}, {"type": "max_accuracy", "value": 60.5}, {"type": "max_ap", "value": 57.616461421424034}, {"type": "max_f1", "value": 62.313697657913416}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.21300000000001}, {"type": "map_at_10", "value": 84.136}, {"type": "map_at_100", "value": 84.796}, {"type": "map_at_1000", "value": 84.812}, {"type": "map_at_3", "value": 81.182}, {"type": "map_at_5", "value": 83.027}, {"type": "mrr_at_1", "value": 80.91000000000001}, {"type": "mrr_at_10", "value": 87.155}, {"type": "mrr_at_100", "value": 87.27000000000001}, {"type": "mrr_at_1000", "value": 87.271}, {"type": "mrr_at_3", "value": 86.158}, {"type": "mrr_at_5", "value": 86.828}, {"type": "ndcg_at_1", "value": 80.88}, {"type": "ndcg_at_10", "value": 87.926}, {"type": "ndcg_at_100", "value": 89.223}, {"type": "ndcg_at_1000", "value": 89.321}, {"type": "ndcg_at_3", "value": 85.036}, {"type": "ndcg_at_5", "value": 86.614}, {"type": "precision_at_1", "value": 80.88}, {"type": "precision_at_10", "value": 13.350000000000001}, {"type": "precision_at_100", "value": 1.5310000000000001}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.173}, {"type": "precision_at_5", "value": 24.476}, {"type": "recall_at_1", "value": 70.21300000000001}, {"type": "recall_at_10", 
"value": 95.12}, {"type": "recall_at_100", "value": 99.535}, {"type": "recall_at_1000", "value": 99.977}, {"type": "recall_at_3", "value": 86.833}, {"type": "recall_at_5", "value": 91.26100000000001}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 47.754688783184875}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 54.875736374329364}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 3.773}, {"type": "map_at_10", "value": 9.447}, {"type": "map_at_100", "value": 11.1}, {"type": "map_at_1000", "value": 11.37}, {"type": "map_at_3", "value": 6.787}, {"type": "map_at_5", "value": 8.077}, {"type": "mrr_at_1", "value": 18.5}, {"type": "mrr_at_10", "value": 28.227000000000004}, {"type": "mrr_at_100", "value": 29.445}, {"type": "mrr_at_1000", "value": 29.515}, {"type": "mrr_at_3", "value": 25.2}, {"type": "mrr_at_5", "value": 27.055}, {"type": "ndcg_at_1", "value": 18.5}, {"type": "ndcg_at_10", "value": 16.29}, {"type": "ndcg_at_100", "value": 23.250999999999998}, {"type": "ndcg_at_1000", "value": 28.445999999999998}, {"type": "ndcg_at_3", "value": 15.376000000000001}, {"type": "ndcg_at_5", "value": 13.528}, {"type": "precision_at_1", "value": 18.5}, {"type": "precision_at_10", "value": 8.51}, {"type": "precision_at_100", "value": 1.855}, {"type": "precision_at_1000", "value": 0.311}, {"type": "precision_at_3", "value": 14.533}, {"type": "precision_at_5", "value": 12.0}, {"type": "recall_at_1", "value": 3.773}, {"type": "recall_at_10", "value": 17.282}, {"type": "recall_at_100", "value": 37.645}, {"type": "recall_at_1000", "value": 63.138000000000005}, {"type": "recall_at_3", "value": 8.853}, {"type": "recall_at_5", "value": 12.168}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.32789517976525}, {"type": "cos_sim_spearman", "value": 80.32750384145629}, {"type": "euclidean_pearson", "value": 81.5025131452508}, {"type": "euclidean_spearman", "value": 80.24797115147175}, {"type": "manhattan_pearson", "value": 81.51634463412002}, {"type": "manhattan_spearman", "value": 80.24614721495055}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.47050448992432}, {"type": "cos_sim_spearman", "value": 80.58919997743621}, {"type": "euclidean_pearson", "value": 85.83258918113664}, {"type": "euclidean_spearman", "value": 80.97441389240902}, {"type": "manhattan_pearson", "value": 85.7798262013878}, {"type": "manhattan_spearman", "value": 80.97208703064196}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 
85.95341439711532}, {"type": "cos_sim_spearman", "value": 86.59127484634989}, {"type": "euclidean_pearson", "value": 85.57850603454227}, {"type": "euclidean_spearman", "value": 86.47130477363419}, {"type": "manhattan_pearson", "value": 85.59387925447652}, {"type": "manhattan_spearman", "value": 86.50665427391583}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.39810909161844}, {"type": "cos_sim_spearman", "value": 82.98595295546008}, {"type": "euclidean_pearson", "value": 84.04681129969951}, {"type": "euclidean_spearman", "value": 82.98197460689866}, {"type": "manhattan_pearson", "value": 83.9918798171185}, {"type": "manhattan_spearman", "value": 82.91148131768082}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.02072712147692}, {"type": "cos_sim_spearman", "value": 88.78821332623012}, {"type": "euclidean_pearson", "value": 88.12132045572747}, {"type": "euclidean_spearman", "value": 88.74273451067364}, {"type": "manhattan_pearson", "value": 88.05431550059166}, {"type": "manhattan_spearman", "value": 88.67610233020723}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.96134704624787}, {"type": "cos_sim_spearman", "value": 84.44062976314666}, {"type": "euclidean_pearson", "value": 84.03642536310323}, {"type": "euclidean_spearman", "value": 84.4535014579785}, {"type": "manhattan_pearson", "value": 83.92874228901483}, {"type": "manhattan_spearman", "value": 84.33634314951631}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-de)", "type": "mteb/sts17-crosslingual-sts", "config": "en-de", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.3154168064887}, {"type": "cos_sim_spearman", "value": 86.72393652571682}, {"type": "euclidean_pearson", "value": 86.04193246174164}, {"type": "euclidean_spearman", "value": 86.30482896608093}, {"type": "manhattan_pearson", "value": 85.95524084651859}, {"type": "manhattan_spearman", "value": 86.06031431994282}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 89.91079682750804}, {"type": "cos_sim_spearman", "value": 89.30961836617064}, {"type": "euclidean_pearson", "value": 88.86249564158628}, {"type": "euclidean_spearman", "value": 89.04772899592396}, {"type": "manhattan_pearson", "value": 88.85579791315043}, {"type": "manhattan_spearman", "value": 88.94190462541333}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 67.00558145551088}, {"type": "cos_sim_spearman", "value": 67.96601170393878}, {"type": "euclidean_pearson", "value": 67.87627043214336}, {"type": "euclidean_spearman", "value": 66.76402572303859}, {"type": 
"manhattan_pearson", "value": 67.88306560555452}, {"type": "manhattan_spearman", "value": 66.6273862035506}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de)", "type": "mteb/sts22-crosslingual-sts", "config": "de", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 50.83759332748726}, {"type": "cos_sim_spearman", "value": 59.066344562858006}, {"type": "euclidean_pearson", "value": 50.08955848154131}, {"type": "euclidean_spearman", "value": 58.36517305855221}, {"type": "manhattan_pearson", "value": 50.05257267223111}, {"type": "manhattan_spearman", "value": 58.37570252804986}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-en)", "type": "mteb/sts22-crosslingual-sts", "config": "de-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 59.22749007956492}, {"type": "cos_sim_spearman", "value": 55.97282077657827}, {"type": "euclidean_pearson", "value": 62.10661533695752}, {"type": "euclidean_spearman", "value": 53.62780854854067}, {"type": "manhattan_pearson", "value": 62.37138085709719}, {"type": "manhattan_spearman", "value": 54.17556356828155}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.91145397065878}, {"type": "cos_sim_spearman", "value": 88.13960018389005}, {"type": "euclidean_pearson", "value": 87.67618876224006}, {"type": "euclidean_spearman", "value": 87.99119480810556}, {"type": "manhattan_pearson", "value": 87.67920297334753}, {"type": "manhattan_spearman", "value": 87.99113250064492}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 78.09133563707582}, {"type": "mrr", "value": 93.2415288052543}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 47.760999999999996}, {"type": "map_at_10", "value": 56.424}, {"type": "map_at_100", "value": 57.24399999999999}, {"type": "map_at_1000", "value": 57.278}, {"type": "map_at_3", "value": 53.68000000000001}, {"type": "map_at_5", "value": 55.442}, {"type": "mrr_at_1", "value": 50.666999999999994}, {"type": "mrr_at_10", "value": 58.012}, {"type": "mrr_at_100", "value": 58.736}, {"type": "mrr_at_1000", "value": 58.769000000000005}, {"type": "mrr_at_3", "value": 56.056}, {"type": "mrr_at_5", "value": 57.321999999999996}, {"type": "ndcg_at_1", "value": 50.666999999999994}, {"type": "ndcg_at_10", "value": 60.67700000000001}, {"type": "ndcg_at_100", "value": 64.513}, {"type": "ndcg_at_1000", "value": 65.62400000000001}, {"type": "ndcg_at_3", "value": 56.186}, {"type": "ndcg_at_5", "value": 58.692}, {"type": "precision_at_1", "value": 50.666999999999994}, {"type": "precision_at_10", "value": 8.200000000000001}, {"type": "precision_at_100", "value": 1.023}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 21.889}, {"type": "precision_at_5", "value": 14.866999999999999}, {"type": "recall_at_1", "value": 47.760999999999996}, {"type": "recall_at_10", "value": 72.006}, {"type": "recall_at_100", "value": 
89.767}, {"type": "recall_at_1000", "value": 98.833}, {"type": "recall_at_3", "value": 60.211000000000006}, {"type": "recall_at_5", "value": 66.3}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.79009900990098}, {"type": "cos_sim_ap", "value": 94.86690691995835}, {"type": "cos_sim_f1", "value": 89.37875751503007}, {"type": "cos_sim_precision", "value": 89.5582329317269}, {"type": "cos_sim_recall", "value": 89.2}, {"type": "dot_accuracy", "value": 99.76336633663367}, {"type": "dot_ap", "value": 94.26453740761586}, {"type": "dot_f1", "value": 88.00783162016641}, {"type": "dot_precision", "value": 86.19367209971237}, {"type": "dot_recall", "value": 89.9}, {"type": "euclidean_accuracy", "value": 99.7940594059406}, {"type": "euclidean_ap", "value": 94.85459757524379}, {"type": "euclidean_f1", "value": 89.62779156327544}, {"type": "euclidean_precision", "value": 88.96551724137932}, {"type": "euclidean_recall", "value": 90.3}, {"type": "manhattan_accuracy", "value": 99.79009900990098}, {"type": "manhattan_ap", "value": 94.76971336654465}, {"type": "manhattan_f1", "value": 89.35323383084577}, {"type": "manhattan_precision", "value": 88.91089108910892}, {"type": "manhattan_recall", "value": 89.8}, {"type": "max_accuracy", "value": 99.7940594059406}, {"type": "max_ap", "value": 94.86690691995835}, {"type": "max_f1", "value": 89.62779156327544}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 55.38197670064987}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 33.08330158937971}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 49.50367079063226}, {"type": "mrr", "value": 50.30444943128768}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.37739520909561}, {"type": "cos_sim_spearman", "value": 31.548500943973913}, {"type": "dot_pearson", "value": 29.983610104303}, {"type": "dot_spearman", "value": 29.90185869098618}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.198}, {"type": "map_at_10", "value": 1.5810000000000002}, {"type": "map_at_100", "value": 9.064}, {"type": "map_at_1000", "value": 22.161}, {"type": "map_at_3", "value": 0.536}, {"type": "map_at_5", "value": 0.8370000000000001}, {"type": "mrr_at_1", "value": 80.0}, {"type": "mrr_at_10", "value": 86.75}, {"type": "mrr_at_100", "value": 86.799}, {"type": "mrr_at_1000", "value": 86.799}, {"type": 
"mrr_at_3", "value": 85.0}, {"type": "mrr_at_5", "value": 86.5}, {"type": "ndcg_at_1", "value": 73.0}, {"type": "ndcg_at_10", "value": 65.122}, {"type": "ndcg_at_100", "value": 51.853}, {"type": "ndcg_at_1000", "value": 47.275}, {"type": "ndcg_at_3", "value": 66.274}, {"type": "ndcg_at_5", "value": 64.826}, {"type": "precision_at_1", "value": 80.0}, {"type": "precision_at_10", "value": 70.19999999999999}, {"type": "precision_at_100", "value": 53.480000000000004}, {"type": "precision_at_1000", "value": 20.946}, {"type": "precision_at_3", "value": 71.333}, {"type": "precision_at_5", "value": 70.0}, {"type": "recall_at_1", "value": 0.198}, {"type": "recall_at_10", "value": 1.884}, {"type": "recall_at_100", "value": 12.57}, {"type": "recall_at_1000", "value": 44.208999999999996}, {"type": "recall_at_3", "value": 0.5890000000000001}, {"type": "recall_at_5", "value": 0.95}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TenKGnadClusteringP2P", "type": "slvnwhrl/tenkgnad-clustering-p2p", "config": "default", "split": "test", "revision": "5c59e41555244b7e45c9a6be2d720ab4bafae558"}, "metrics": [{"type": "v_measure", "value": 42.84199261133083}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TenKGnadClusteringS2S", "type": "slvnwhrl/tenkgnad-clustering-s2s", "config": "default", "split": "test", "revision": "6cddbe003f12b9b140aec477b583ac4191f01786"}, "metrics": [{"type": "v_measure", "value": 23.689557114798838}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 1.941}, {"type": "map_at_10", "value": 8.222}, {"type": "map_at_100", "value": 14.277999999999999}, {"type": "map_at_1000", "value": 15.790000000000001}, {"type": "map_at_3", "value": 4.4670000000000005}, {"type": "map_at_5", "value": 5.762}, {"type": "mrr_at_1", "value": 24.490000000000002}, {"type": "mrr_at_10", "value": 38.784}, {"type": "mrr_at_100", "value": 39.724}, {"type": "mrr_at_1000", "value": 39.724}, {"type": "mrr_at_3", "value": 33.333}, {"type": "mrr_at_5", "value": 37.415}, {"type": "ndcg_at_1", "value": 22.448999999999998}, {"type": "ndcg_at_10", "value": 21.026}, {"type": "ndcg_at_100", "value": 33.721000000000004}, {"type": "ndcg_at_1000", "value": 45.045}, {"type": "ndcg_at_3", "value": 20.053}, {"type": "ndcg_at_5", "value": 20.09}, {"type": "precision_at_1", "value": 24.490000000000002}, {"type": "precision_at_10", "value": 19.796}, {"type": "precision_at_100", "value": 7.469}, {"type": "precision_at_1000", "value": 1.48}, {"type": "precision_at_3", "value": 21.769}, {"type": "precision_at_5", "value": 21.224}, {"type": "recall_at_1", "value": 1.941}, {"type": "recall_at_10", "value": 14.915999999999999}, {"type": "recall_at_100", "value": 46.155}, {"type": "recall_at_1000", "value": 80.664}, {"type": "recall_at_3", "value": 5.629}, {"type": "recall_at_5", "value": 8.437}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 69.64800000000001}, {"type": "ap", "value": 12.914826731261094}, {"type": "f1", "value": 53.05213503422915}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", 
"revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 60.427277872099594}, {"type": "f1", "value": 60.78292007556828}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 40.48134168406559}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 84.79465935506944}, {"type": "cos_sim_ap", "value": 70.24589055290592}, {"type": "cos_sim_f1", "value": 65.0994575045208}, {"type": "cos_sim_precision", "value": 63.76518218623482}, {"type": "cos_sim_recall", "value": 66.49076517150397}, {"type": "dot_accuracy", "value": 84.63968528342374}, {"type": "dot_ap", "value": 69.84683095084355}, {"type": "dot_f1", "value": 64.50606169727523}, {"type": "dot_precision", "value": 59.1719885487778}, {"type": "dot_recall", "value": 70.89709762532982}, {"type": "euclidean_accuracy", "value": 84.76485664898374}, {"type": "euclidean_ap", "value": 70.20556438685551}, {"type": "euclidean_f1", "value": 65.06796614516543}, {"type": "euclidean_precision", "value": 63.29840319361277}, {"type": "euclidean_recall", "value": 66.93931398416886}, {"type": "manhattan_accuracy", "value": 84.72313286046374}, {"type": "manhattan_ap", "value": 70.17151475534308}, {"type": "manhattan_f1", "value": 65.31379180759113}, {"type": "manhattan_precision", "value": 62.17505366086334}, {"type": "manhattan_recall", "value": 68.7862796833773}, {"type": "max_accuracy", "value": 84.79465935506944}, {"type": "max_ap", "value": 70.24589055290592}, {"type": "max_f1", "value": 65.31379180759113}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.95874568246207}, {"type": "cos_sim_ap", "value": 85.82517548264127}, {"type": "cos_sim_f1", "value": 78.22288041466125}, {"type": "cos_sim_precision", "value": 75.33875338753387}, {"type": "cos_sim_recall", "value": 81.33661841700031}, {"type": "dot_accuracy", "value": 88.836496293709}, {"type": "dot_ap", "value": 85.53430720252186}, {"type": "dot_f1", "value": 78.10616085869725}, {"type": "dot_precision", "value": 74.73269555430501}, {"type": "dot_recall", "value": 81.79858330766862}, {"type": "euclidean_accuracy", "value": 88.92769821865176}, {"type": "euclidean_ap", "value": 85.65904346964223}, {"type": "euclidean_f1", "value": 77.98774074208407}, {"type": "euclidean_precision", "value": 73.72282795035315}, {"type": "euclidean_recall", "value": 82.77640899291654}, {"type": "manhattan_accuracy", "value": 88.86366282454303}, {"type": "manhattan_ap", "value": 85.61599642231819}, {"type": "manhattan_f1", "value": 78.01480509061737}, {"type": "manhattan_precision", "value": 74.10460685833044}, {"type": "manhattan_recall", "value": 82.36064059131506}, {"type": "max_accuracy", "value": 88.95874568246207}, {"type": "max_ap", "value": 85.82517548264127}, {"type": "max_f1", "value": 78.22288041466125}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB WikiCLIR", "type": "None", 
"config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 3.9539999999999997}, {"type": "map_at_10", "value": 7.407}, {"type": "map_at_100", "value": 8.677999999999999}, {"type": "map_at_1000", "value": 9.077}, {"type": "map_at_3", "value": 5.987}, {"type": "map_at_5", "value": 6.6979999999999995}, {"type": "mrr_at_1", "value": 35.65}, {"type": "mrr_at_10", "value": 45.097}, {"type": "mrr_at_100", "value": 45.83}, {"type": "mrr_at_1000", "value": 45.871}, {"type": "mrr_at_3", "value": 42.63}, {"type": "mrr_at_5", "value": 44.104}, {"type": "ndcg_at_1", "value": 29.215000000000003}, {"type": "ndcg_at_10", "value": 22.694}, {"type": "ndcg_at_100", "value": 22.242}, {"type": "ndcg_at_1000", "value": 27.069}, {"type": "ndcg_at_3", "value": 27.641}, {"type": "ndcg_at_5", "value": 25.503999999999998}, {"type": "precision_at_1", "value": 35.65}, {"type": "precision_at_10", "value": 12.795000000000002}, {"type": "precision_at_100", "value": 3.354}, {"type": "precision_at_1000", "value": 0.743}, {"type": "precision_at_3", "value": 23.403}, {"type": "precision_at_5", "value": 18.474}, {"type": "recall_at_1", "value": 3.9539999999999997}, {"type": "recall_at_10", "value": 11.301}, {"type": "recall_at_100", "value": 22.919999999999998}, {"type": "recall_at_1000", "value": 40.146}, {"type": "recall_at_3", "value": 7.146}, {"type": "recall_at_5", "value": 8.844000000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB XMarket", "type": "jinaai/xmarket_de", "config": "default", "split": "test", "revision": "2336818db4c06570fcdf263e1bcb9993b786f67a"}, "metrics": [{"type": "map_at_1", "value": 4.872}, {"type": "map_at_10", "value": 10.658}, {"type": "map_at_100", "value": 13.422999999999998}, {"type": "map_at_1000", "value": 14.245}, {"type": "map_at_3", "value": 7.857}, {"type": "map_at_5", "value": 9.142999999999999}, {"type": "mrr_at_1", "value": 16.744999999999997}, {"type": "mrr_at_10", "value": 24.416}, {"type": "mrr_at_100", "value": 25.432}, {"type": "mrr_at_1000", "value": 25.502999999999997}, {"type": "mrr_at_3", "value": 22.096}, {"type": "mrr_at_5", "value": 23.421}, {"type": "ndcg_at_1", "value": 16.695999999999998}, {"type": "ndcg_at_10", "value": 18.66}, {"type": "ndcg_at_100", "value": 24.314}, {"type": "ndcg_at_1000", "value": 29.846}, {"type": "ndcg_at_3", "value": 17.041999999999998}, {"type": "ndcg_at_5", "value": 17.585}, {"type": "precision_at_1", "value": 16.695999999999998}, {"type": "precision_at_10", "value": 10.374}, {"type": "precision_at_100", "value": 3.988}, {"type": "precision_at_1000", "value": 1.1860000000000002}, {"type": "precision_at_3", "value": 14.21}, {"type": "precision_at_5", "value": 12.623000000000001}, {"type": "recall_at_1", "value": 4.872}, {"type": "recall_at_10", "value": 18.624}, {"type": "recall_at_100", "value": 40.988}, {"type": "recall_at_1000", "value": 65.33}, {"type": "recall_at_3", "value": 10.162}, {"type": "recall_at_5", "value": 13.517999999999999}]}]}]}
dataset
null
545
pruas/BENT-PubMedBERT-NER-Cell-Line
pruas
token-classification
[ "transformers", "pytorch", "bert", "token-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-14T14:25:56Z
2024-03-02T10:08:07+00:00
1,534
2
--- language: - en pipeline_tag: token-classification --- Named Entity Recognition (NER) model to recognize cell line entities. Please cite our work: ``` @article{NILNKER2022, title = {NILINKER: Attention-based approach to NIL Entity Linking}, journal = {Journal of Biomedical Informatics}, volume = {132}, pages = {104137}, year = {2022}, issn = {1532-0464}, doi = {https://doi.org/10.1016/j.jbi.2022.104137}, url = {https://www.sciencedirect.com/science/article/pii/S1532046422001526}, author = {Pedro Ruas and Francisco M. Couto}, } ``` [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) fine-tuned on the following datasets: - [CellFinder](http://cellfinder.org/about/annotation/): entity type "CellLine" - [JNLPBA](http://www.geniaproject.org/genia-corpus/term-corpus): entity type "cell_line"
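The card above lists the fine-tuning datasets but no inference snippet; the following is a minimal sketch, assuming the standard `transformers` token-classification pipeline (the example sentence and the `aggregation_strategy` choice are illustrative, and the exact entity label names come from the model's own config rather than from the card):

```python
from transformers import pipeline

# Minimal sketch: run the cell-line NER checkpoint through the standard
# token-classification pipeline. "simple" aggregation merges word pieces
# back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="pruas/BENT-PubMedBERT-NER-Cell-Line",
    aggregation_strategy="simple",
)

text = "HeLa cells were cultured alongside the K562 cell line."
for entity in ner(text):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```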
[ "CELLFINDER", "JNLPBA" ]
BioNLP
Named Entity Recognition (NER) model to recognize cell line entities. Please cite our work: ``` @article{NILNKER2022, title = {NILINKER: Attention-based approach to NIL Entity Linking}, journal = {Journal of Biomedical Informatics}, volume = {132}, pages = {104137}, year = {2022}, issn = {1532-0464}, doi = {https://doi.org/10.1016/j.jbi.2022.104137}, url = {https://www.sciencedirect.com/science/article/pii/S1532046422001526}, author = {Pedro Ruas and Francisco M. Couto}, } ``` [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) fine-tuned on the following datasets: - [CellFinder](http://cellfinder.org/about/annotation/): entity type "CellLine" - [JNLPBA](http://www.geniaproject.org/genia-corpus/term-corpus): entity type "cell_line"
{"language": ["en"], "pipeline_tag": "token-classification"}
dataset
null
546
alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline
alonzogarbanzo
text-generation
[ "transformers", "safetensors", "bloom", "text-generation", "generated_from_trainer", "base_model:bigscience/bloom-1b7", "base_model:finetune:bigscience/bloom-1b7", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-02-27T19:47:27Z
2024-02-27T21:29:31+00:00
100
0
--- base_model: bigscience/bloom-1b7 license: bigscience-bloom-rail-1.0 tags: - generated_from_trainer model-index: - name: Bloom-1b7-creative-writing-IT results: [] --- # Bloom-1b7-creative-writing-IT This model is a fine-tuned version of [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) on a creative writing - short story dataset: https://huggingface.co/datasets/adambjorn/UnrelatedForgettingOverhead/viewer/creative ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data Training and evaluation data here: https://huggingface.co/datasets/adambjorn/UnrelatedForgettingOverhead/viewer/creative ## Training procedure The model was instruction-tuned on the dataset in the following way: Given the set of prompts: ``` python prompts = [ "Write a creative short story based on the following title:", "Here is a title for a story. Craft a short narrative around it:", "Using the title given, develop a short story:", "Imagine a short story that starts with this title:", "Create a brief story with the following title:" ] ``` each training example is generated by concatenating one of the prompts with the 'title' and 'selftext' in the following way: ``` python concatenated_texts = [random.choice(prompts) + " " + title + "</s>" + "Story: " + selftext for title, selftext in zip(titles, selftexts)] ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results Final reported loss: {'loss': 0.0135, 'grad_norm': 0.6041152477264404, 'learning_rate': 7.446808510638299e-07, 'epoch': 9.89} Average over tuning: {'train_runtime': 1111.4187, 'train_samples_per_second': 1.71, 'train_steps_per_second': 0.423, 'train_loss': 0.4682149670225509, 'epoch': 9.89} ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
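The card documents only the training-side prompt construction; as a hedged illustration, an inference call would mirror that template. This is a sketch, not part of the original card: the example title and the sampling settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: prompt the fine-tuned model the same way the training examples
# were built (instruction prompt + title, then "Story: " as the cue).
model_id = "alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

title = "The Lighthouse at the Edge of the Map"  # illustrative title, not from the dataset
prompt = f"Write a creative short story based on the following title: {title}</s>Story: "

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.9, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```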
[ "CRAFT" ]
Non_BioNLP
# Bloom-1b7-creative-writing-IT This model is a fine-tuned version of [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) on a creative writing - short story dataset: https://huggingface.co/datasets/adambjorn/UnrelatedForgettingOverhead/viewer/creative ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data Training and evaluation data here: https://huggingface.co/datasets/adambjorn/UnrelatedForgettingOverhead/viewer/creative ## Training procedure The model was instruction-tuned on the dataset in the following way: Given the set of prompts: ``` python prompts = [ "Write a creative short story based on the following title:", "Here is a title for a story. Craft a short narrative around it:", "Using the title given, develop a short story:", "Imagine a short story that starts with this title:", "Create a brief story with the following title:" ] ``` each training example is generated by concatenating one of the prompts with the 'title' and 'selftext' in the following way: ``` python concatenated_texts = [random.choice(prompts) + " " + title + "</s>" + "Story: " + selftext for title, selftext in zip(titles, selftexts)] ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results Final reported loss: {'loss': 0.0135, 'grad_norm': 0.6041152477264404, 'learning_rate': 7.446808510638299e-07, 'epoch': 9.89} Average over tuning: {'train_runtime': 1111.4187, 'train_samples_per_second': 1.71, 'train_steps_per_second': 0.423, 'train_loss': 0.4682149670225509, 'epoch': 9.89} ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
{"base_model": "bigscience/bloom-1b7", "license": "bigscience-bloom-rail-1.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "Bloom-1b7-creative-writing-IT", "results": []}]}
dataset
null
547
c01zaut/MiniCPM-V-2_6-rk3588-1.1.4
c01zaut
image-text-to-text
[ "transformers", "safetensors", "minicpmv", "feature-extraction", "minicpm-v", "vision", "ocr", "multi-image", "video", "custom_code", "image-text-to-text", "conversational", "multilingual", "dataset:openbmb/RLAIF-V-Dataset", "arxiv:2408.01800", "region:us" ]
2024-11-07T03:47:27Z
2024-12-15T04:26:26+00:00
25
2
--- datasets: - openbmb/RLAIF-V-Dataset language: - multilingual library_name: transformers pipeline_tag: image-text-to-text tags: - minicpm-v - vision - ocr - multi-image - video - custom_code --- # MiniCPM-V-2_6-RK3588-1.1.4 This version of MiniCPM-V-2_6 has been converted to run on the RK3588 NPU using ['w8a8', 'w8a8_g128', 'w8a8_g256', 'w8a8_g512'] quantization. This model has been optimized with the following LoRA: Compatible with RKLLM version: 1.1.4 ## Useful links: [Official RKLLM GitHub](https://github.com/airockchip/rknn-llm) [RockhipNPU Reddit](https://reddit.com/r/RockchipNPU) [EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/) Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531) Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit # Original Model Card for base model, MiniCPM-V-2_6, below: <h1>A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone</h1> [GitHub](https://github.com/OpenBMB/MiniCPM-V) | [Demo](http://120.92.209.146:8887/)</a> ## MiniCPM-V 2.6 **MiniCPM-V 2.6** is the latest and most capable model in the MiniCPM-V series. The model is built on SigLip-400M and Qwen2-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-Llama3-V 2.5, and introduces new features for multi-image and video understanding. Notable features of MiniCPM-V 2.6 include: - 🔥 **Leading Performance.** MiniCPM-V 2.6 achieves an average score of 65.2 on the latest version of OpenCompass, a comprehensive evaluation over 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o mini, GPT-4V, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding. - 🖼️ **Multi Image Understanding and In-context Learning.** MiniCPM-V 2.6 can also perform **conversation and reasoning over multiple images**. It achieves **state-of-the-art performance** on popular multi-image benchmarks such as Mantis-Eval, BLINK, Mathverse mv and Sciverse mv, and also shows promising in-context learning capability. - 🎬 **Video Understanding.** MiniCPM-V 2.6 can also **accept video inputs**, performing conversation and providing dense captions for spatial-temporal information. It outperforms **GPT-4V, Claude 3.5 Sonnet and LLaVA-NeXT-Video-34B** on Video-MME with/without subtitles. - 💪 **Strong OCR Capability and Others.** MiniCPM-V 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V, and Gemini 1.5 Pro**. Based on the the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, with significantly lower hallucination rates than GPT-4o and GPT-4V on Object HalBench, and supports **multilingual capabilities** on English, Chinese, German, French, Italian, Korean, etc. - 🚀 **Superior Efficiency.** In addition to its friendly size, MiniCPM-V 2.6 also shows **state-of-the-art token density** (i.e., number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-V 2.6 can efficiently support **real-time video understanding** on end-side devices such as iPad. 
- 💫 **Easy Usage.** MiniCPM-V 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpmv-main/examples/llava/README-minicpmv2.6.md) and [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.6) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](https://github.com/OpenBMB/MiniCPM-V/tree/main?tab=readme-ov-file#inference-with-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks, (5) quick local WebUI demo setup with [Gradio](https://github.com/OpenBMB/MiniCPM-V/tree/main?tab=readme-ov-file#chat-with-our-demo-on-gradio) and (6) online web [demo](http://120.92.209.146:8887). ### Evaluation <!-- omit in toc --> <div align="center"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/radar_final.png" width=66% /> </div> #### Single image results on OpenCompass, MME, MMVet, OCRBench, MMMU, MathVista, MMB, AI2D, TextVQA, DocVQA, HallusionBench, Object HalBench: <div align="center"> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/QVl0iPtT5aUhlvViyEpgs.png) </div> <sup>*</sup> We evaluate this benchmark using chain-of-thought prompting. <sup>+</sup> Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens. Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation. #### Multi-image results on Mantis Eval, BLINK Val, Mathverse mv, Sciverse mv, MIRB: <div align="center"> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/o6FGHytRhzeatmhxq0Dbi.png) </div> <sup>*</sup> We evaluate the officially released checkpoint by ourselves. #### Video results on Video-MME and Video-ChatGPT: <div align="center"> <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/_T1mw5yhqNCqVdYRTQOGu.png) --> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/jmrjoRr8SFLkrstjDmpaV.png) </div> <details> <summary>Click to view few-shot results on TextVQA, VizWiz, VQAv2, OK-VQA.</summary> <div align="center"> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/zXIuiCTTe-POqKGHszdn0.png) </div> * denotes zero image shot and two additional text shots following Flamingo. <sup>+</sup> We evaluate the pretraining ckpt without SFT. 
</details> ### Examples <!-- omit in toc --> <div style="display: flex; flex-direction: column; align-items: center;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-bike.png" alt="Bike" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-menu.png" alt="Menu" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-code.png" alt="Code" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/ICL-Mem.png" alt="Mem" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multiling-medal.png" alt="medal" style="margin-bottom: 10px;"> </div> <details> <summary>Click to view more cases.</summary> <div style="display: flex; flex-direction: column; align-items: center;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/ICL-elec.png" alt="elec" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multiling-olympic.png" alt="Menu" style="margin-bottom: 10px;"> </div> </details> We deploy MiniCPM-V 2.6 on end devices. The demo video is the raw screen recording on a iPad Pro without edition. <div style="display: flex; justify-content: center;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/ai.gif" width="48%" style="margin: 0 10px;"/> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/beer.gif" width="48%" style="margin: 0 10px;"/> </div> <div style="display: flex; justify-content: center; margin-top: 20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/ticket.gif" width="48%" style="margin: 0 10px;"/> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/wfh.gif" width="48%" style="margin: 0 10px;"/> </div> <div style="text-align: center;"> <video controls autoplay src="https://hf.fast360.xyz/production/uploads/64abc4aa6cadc7aca585dddf/mXAEFQFqNd4nnvPk7r5eX.mp4"></video> <!-- <video controls autoplay src="https://hf.fast360.xyz/production/uploads/64abc4aa6cadc7aca585dddf/fEWzfHUdKnpkM7sdmnBQa.mp4"></video> --> </div> ## Demo Click here to try the Demo of [MiniCPM-V 2.6](http://120.92.209.146:8887/). ## Usage Inference using Huggingface transformers on NVIDIA GPUs. Requirements tested on python 3.10: ``` Pillow==10.1.0 torch==2.1.2 torchvision==0.16.2 transformers==4.40.0 sentencepiece==0.1.99 decord ``` ```python # test.py import torch from PIL import Image from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) image = Image.open('xx.jpg').convert('RGB') question = 'What is in the image?' 
msgs = [{'role': 'user', 'content': [image, question]}] res = model.chat( image=None, msgs=msgs, tokenizer=tokenizer ) print(res) ## if you want to use streaming, please make sure sampling=True and stream=True ## the model.chat will return a generator res = model.chat( image=None, msgs=msgs, tokenizer=tokenizer, sampling=True, stream=True ) generated_text = "" for new_text in res: generated_text += new_text print(new_text, flush=True, end='') ``` ### Chat with multiple images <details> <summary> Click to show Python code running MiniCPM-V 2.6 with multiple images input. </summary> ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) image1 = Image.open('image1.jpg').convert('RGB') image2 = Image.open('image2.jpg').convert('RGB') question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.' msgs = [{'role': 'user', 'content': [image1, image2, question]}] answer = model.chat( image=None, msgs=msgs, tokenizer=tokenizer ) print(answer) ``` </details> ### In-context few-shot learning <details> <summary> Click to view Python code running MiniCPM-V 2.6 with few-shot input. </summary> ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) question = "production date" image1 = Image.open('example1.jpg').convert('RGB') answer1 = "2023.08.04" image2 = Image.open('example2.jpg').convert('RGB') answer2 = "2007.04.24" image_test = Image.open('test.jpg').convert('RGB') msgs = [ {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]}, {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]}, {'role': 'user', 'content': [image_test, question]} ] answer = model.chat( image=None, msgs=msgs, tokenizer=tokenizer ) print(answer) ``` </details> ### Chat with video <details> <summary> Click to view Python code running MiniCPM-V 2.6 with video input. 
</summary> ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer from decord import VideoReader, cpu # pip install decord model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) MAX_NUM_FRAMES=64 # if cuda OOM set a smaller number def encode_video(video_path): def uniform_sample(l, n): gap = len(l) / n idxs = [int(i * gap + gap / 2) for i in range(n)] return [l[i] for i in idxs] vr = VideoReader(video_path, ctx=cpu(0)) sample_fps = round(vr.get_avg_fps() / 1) # FPS frame_idx = [i for i in range(0, len(vr), sample_fps)] if len(frame_idx) > MAX_NUM_FRAMES: frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES) frames = vr.get_batch(frame_idx).asnumpy() frames = [Image.fromarray(v.astype('uint8')) for v in frames] print('num frames:', len(frames)) return frames video_path ="video_test.mp4" frames = encode_video(video_path) question = "Describe the video" msgs = [ {'role': 'user', 'content': frames + [question]}, ] # Set decode params for video params={} params["use_image_id"] = False params["max_slice_nums"] = 2 # use 1 if cuda OOM and video resolution > 448*448 answer = model.chat( image=None, msgs=msgs, tokenizer=tokenizer, **params ) print(answer) ``` </details> Please look at [GitHub](https://github.com/OpenBMB/MiniCPM-V) for more detail about usage. ## Inference with llama.cpp<a id="llamacpp"></a> MiniCPM-V 2.6 can run with llama.cpp. See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv) for more detail. ## Int4 quantized version Download the int4 quantized version for lower GPU memory (7GB) usage: [MiniCPM-V-2_6-int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4). ## License #### Model License * The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License. * The usage of MiniCPM-V series model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md). * The models and weights of MiniCPM are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM-V 2.6 weights are also available for free commercial use. #### Statement * As an LMM, MiniCPM-V 2.6 generates contents by learning a large mount of multimodal corpora, but it cannot comprehend, express personal opinions or make value judgement. Anything generated by MiniCPM-V 2.6 does not represent the views and positions of the model developers * We will not be liable for any problems arising from the use of the MinCPM-V models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model. ## Key Techniques and Other Multimodal Projects 👏 Welcome to explore key techniques of MiniCPM-V 2.6 and other multimodal projects of our team: [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V) ## Citation If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️! 
```bib @article{yao2024minicpm, title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone}, author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others}, journal={arXiv preprint arXiv:2408.01800}, year={2024} } ```
[ "MEDAL" ]
Non_BioNLP
# MiniCPM-V-2_6-RK3588-1.1.4 This version of MiniCPM-V-2_6 has been converted to run on the RK3588 NPU using ['w8a8', 'w8a8_g128', 'w8a8_g256', 'w8a8_g512'] quantization. This model has been optimized with the following LoRA: Compatible with RKLLM version: 1.1.4 ## Useful links: [Official RKLLM GitHub](https://github.com/airockchip/rknn-llm) [RockhipNPU Reddit](https://reddit.com/r/RockchipNPU) [EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/) Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531) Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit # Original Model Card for base model, MiniCPM-V-2_6, below: <h1>A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone</h1> [GitHub](https://github.com/OpenBMB/MiniCPM-V) | [Demo](http://120.92.209.146:8887/)</a> ## MiniCPM-V 2.6 **MiniCPM-V 2.6** is the latest and most capable model in the MiniCPM-V series. The model is built on SigLip-400M and Qwen2-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-Llama3-V 2.5, and introduces new features for multi-image and video understanding. Notable features of MiniCPM-V 2.6 include: - 🔥 **Leading Performance.** MiniCPM-V 2.6 achieves an average score of 65.2 on the latest version of OpenCompass, a comprehensive evaluation over 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o mini, GPT-4V, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding. - 🖼️ **Multi Image Understanding and In-context Learning.** MiniCPM-V 2.6 can also perform **conversation and reasoning over multiple images**. It achieves **state-of-the-art performance** on popular multi-image benchmarks such as Mantis-Eval, BLINK, Mathverse mv and Sciverse mv, and also shows promising in-context learning capability. - 🎬 **Video Understanding.** MiniCPM-V 2.6 can also **accept video inputs**, performing conversation and providing dense captions for spatial-temporal information. It outperforms **GPT-4V, Claude 3.5 Sonnet and LLaVA-NeXT-Video-34B** on Video-MME with/without subtitles. - 💪 **Strong OCR Capability and Others.** MiniCPM-V 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V, and Gemini 1.5 Pro**. Based on the the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, with significantly lower hallucination rates than GPT-4o and GPT-4V on Object HalBench, and supports **multilingual capabilities** on English, Chinese, German, French, Italian, Korean, etc. - 🚀 **Superior Efficiency.** In addition to its friendly size, MiniCPM-V 2.6 also shows **state-of-the-art token density** (i.e., number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-V 2.6 can efficiently support **real-time video understanding** on end-side devices such as iPad. 
- 💫 **Easy Usage.** MiniCPM-V 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpmv-main/examples/llava/README-minicpmv2.6.md) and [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.6) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](https://github.com/OpenBMB/MiniCPM-V/tree/main?tab=readme-ov-file#inference-with-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks, (5) quick local WebUI demo setup with [Gradio](https://github.com/OpenBMB/MiniCPM-V/tree/main?tab=readme-ov-file#chat-with-our-demo-on-gradio) and (6) online web [demo](http://120.92.209.146:8887). ### Evaluation <!-- omit in toc --> <div align="center"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/radar_final.png" width=66% /> </div> #### Single image results on OpenCompass, MME, MMVet, OCRBench, MMMU, MathVista, MMB, AI2D, TextVQA, DocVQA, HallusionBench, Object HalBench: <div align="center"> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/QVl0iPtT5aUhlvViyEpgs.png) </div> <sup>*</sup> We evaluate this benchmark using chain-of-thought prompting. <sup>+</sup> Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens. Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation. #### Multi-image results on Mantis Eval, BLINK Val, Mathverse mv, Sciverse mv, MIRB: <div align="center"> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/o6FGHytRhzeatmhxq0Dbi.png) </div> <sup>*</sup> We evaluate the officially released checkpoint by ourselves. #### Video results on Video-MME and Video-ChatGPT: <div align="center"> <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/_T1mw5yhqNCqVdYRTQOGu.png) --> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/jmrjoRr8SFLkrstjDmpaV.png) </div> <details> <summary>Click to view few-shot results on TextVQA, VizWiz, VQAv2, OK-VQA.</summary> <div align="center"> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/zXIuiCTTe-POqKGHszdn0.png) </div> * denotes zero image shot and two additional text shots following Flamingo. <sup>+</sup> We evaluate the pretraining ckpt without SFT. 
</details> ### Examples <!-- omit in toc --> <div style="display: flex; flex-direction: column; align-items: center;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-bike.png" alt="Bike" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-menu.png" alt="Menu" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-code.png" alt="Code" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/ICL-Mem.png" alt="Mem" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multiling-medal.png" alt="medal" style="margin-bottom: 10px;"> </div> <details> <summary>Click to view more cases.</summary> <div style="display: flex; flex-direction: column; align-items: center;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/ICL-elec.png" alt="elec" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multiling-olympic.png" alt="Menu" style="margin-bottom: 10px;"> </div> </details> We deploy MiniCPM-V 2.6 on end devices. The demo video is the raw screen recording on a iPad Pro without edition. <div style="display: flex; justify-content: center;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/ai.gif" width="48%" style="margin: 0 10px;"/> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/beer.gif" width="48%" style="margin: 0 10px;"/> </div> <div style="display: flex; justify-content: center; margin-top: 20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/ticket.gif" width="48%" style="margin: 0 10px;"/> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/wfh.gif" width="48%" style="margin: 0 10px;"/> </div> <div style="text-align: center;"> <video controls autoplay src="https://hf.fast360.xyz/production/uploads/64abc4aa6cadc7aca585dddf/mXAEFQFqNd4nnvPk7r5eX.mp4"></video> <!-- <video controls autoplay src="https://hf.fast360.xyz/production/uploads/64abc4aa6cadc7aca585dddf/fEWzfHUdKnpkM7sdmnBQa.mp4"></video> --> </div> ## Demo Click here to try the Demo of [MiniCPM-V 2.6](http://120.92.209.146:8887/). ## Usage Inference using Huggingface transformers on NVIDIA GPUs. Requirements tested on python 3.10: ``` Pillow==10.1.0 torch==2.1.2 torchvision==0.16.2 transformers==4.40.0 sentencepiece==0.1.99 decord ``` ```python # test.py import torch from PIL import Image from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) image = Image.open('xx.jpg').convert('RGB') question = 'What is in the image?' 
msgs = [{'role': 'user', 'content': [image, question]}] res = model.chat( image=None, msgs=msgs, tokenizer=tokenizer ) print(res) ## if you want to use streaming, please make sure sampling=True and stream=True ## the model.chat will return a generator res = model.chat( image=None, msgs=msgs, tokenizer=tokenizer, sampling=True, stream=True ) generated_text = "" for new_text in res: generated_text += new_text print(new_text, flush=True, end='') ``` ### Chat with multiple images <details> <summary> Click to show Python code running MiniCPM-V 2.6 with multiple images input. </summary> ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) image1 = Image.open('image1.jpg').convert('RGB') image2 = Image.open('image2.jpg').convert('RGB') question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.' msgs = [{'role': 'user', 'content': [image1, image2, question]}] answer = model.chat( image=None, msgs=msgs, tokenizer=tokenizer ) print(answer) ``` </details> ### In-context few-shot learning <details> <summary> Click to view Python code running MiniCPM-V 2.6 with few-shot input. </summary> ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) question = "production date" image1 = Image.open('example1.jpg').convert('RGB') answer1 = "2023.08.04" image2 = Image.open('example2.jpg').convert('RGB') answer2 = "2007.04.24" image_test = Image.open('test.jpg').convert('RGB') msgs = [ {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]}, {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]}, {'role': 'user', 'content': [image_test, question]} ] answer = model.chat( image=None, msgs=msgs, tokenizer=tokenizer ) print(answer) ``` </details> ### Chat with video <details> <summary> Click to view Python code running MiniCPM-V 2.6 with video input. 
</summary> ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer from decord import VideoReader, cpu # pip install decord model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) MAX_NUM_FRAMES=64 # if cuda OOM set a smaller number def encode_video(video_path): def uniform_sample(l, n): gap = len(l) / n idxs = [int(i * gap + gap / 2) for i in range(n)] return [l[i] for i in idxs] vr = VideoReader(video_path, ctx=cpu(0)) sample_fps = round(vr.get_avg_fps() / 1) # FPS frame_idx = [i for i in range(0, len(vr), sample_fps)] if len(frame_idx) > MAX_NUM_FRAMES: frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES) frames = vr.get_batch(frame_idx).asnumpy() frames = [Image.fromarray(v.astype('uint8')) for v in frames] print('num frames:', len(frames)) return frames video_path ="video_test.mp4" frames = encode_video(video_path) question = "Describe the video" msgs = [ {'role': 'user', 'content': frames + [question]}, ] # Set decode params for video params={} params["use_image_id"] = False params["max_slice_nums"] = 2 # use 1 if cuda OOM and video resolution > 448*448 answer = model.chat( image=None, msgs=msgs, tokenizer=tokenizer, **params ) print(answer) ``` </details> Please look at [GitHub](https://github.com/OpenBMB/MiniCPM-V) for more detail about usage. ## Inference with llama.cpp<a id="llamacpp"></a> MiniCPM-V 2.6 can run with llama.cpp. See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv) for more detail. ## Int4 quantized version Download the int4 quantized version for lower GPU memory (7GB) usage: [MiniCPM-V-2_6-int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4). ## License #### Model License * The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License. * The usage of MiniCPM-V series model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md). * The models and weights of MiniCPM are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM-V 2.6 weights are also available for free commercial use. #### Statement * As an LMM, MiniCPM-V 2.6 generates contents by learning a large mount of multimodal corpora, but it cannot comprehend, express personal opinions or make value judgement. Anything generated by MiniCPM-V 2.6 does not represent the views and positions of the model developers * We will not be liable for any problems arising from the use of the MinCPM-V models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model. ## Key Techniques and Other Multimodal Projects 👏 Welcome to explore key techniques of MiniCPM-V 2.6 and other multimodal projects of our team: [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V) ## Citation If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️! 
```bib @article{yao2024minicpm, title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone}, author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others}, journal={arXiv preprint arXiv:2408.01800}, year={2024} } ```
{"datasets": ["openbmb/RLAIF-V-Dataset"], "language": ["multilingual"], "library_name": "transformers", "pipeline_tag": "image-text-to-text", "tags": ["minicpm-v", "vision", "ocr", "multi-image", "video", "custom_code"]}
dataset
null
548
Carmen000/dreambooth_lora_live_teddybear11
Carmen000
text-to-image
[ "diffusers", "text-to-image", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
2024-08-26T21:05:34Z
2024-08-26T21:39:29+00:00
0
0
--- base_model: runwayml/stable-diffusion-v1-5 library_name: diffusers license: creativeml-openrail-m tags: - text-to-image - diffusers - lora - diffusers-training - stable-diffusion - stable-diffusion-diffusers inference: true instance_prompt: A photo of sks teddy bear --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA DreamBooth - Carmen000/dreambooth_lora_live_teddybear11 These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "A photo of sks teddy bear" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
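The "How to use" section above is still a TODO; a plausible sketch follows, assuming a recent `diffusers` release with `load_lora_weights` support and a CUDA device. It is not an official snippet from the training script.

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch: load the base Stable Diffusion 1.5 pipeline and apply the LoRA
# adapter weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("Carmen000/dreambooth_lora_live_teddybear11")
pipe = pipe.to("cuda")

# The instance prompt used during DreamBooth training.
image = pipe("A photo of sks teddy bear", num_inference_steps=30).images[0]
image.save("sks_teddy_bear.png")
```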
[ "BEAR" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA DreamBooth - Carmen000/dreambooth_lora_live_teddybear11 These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "A photo of sks teddy bear" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"base_model": "runwayml/stable-diffusion-v1-5", "library_name": "diffusers", "license": "creativeml-openrail-m", "tags": ["text-to-image", "diffusers", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "instance_prompt": "A photo of sks teddy bear"}
dataset
null
549
twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF
twine-network
sentence-similarity
[ "sentence-transformers", "gguf", "feature-extraction", "mteb", "sentence-similarity", "transformers", "llama-cpp", "gguf-my-repo", "en", "base_model:avsolatorio/NoInstruct-small-Embedding-v0", "base_model:quantized:avsolatorio/NoInstruct-small-Embedding-v0", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-11-03T19:50:12Z
2024-11-03T19:50:14+00:00
21
0
--- base_model: avsolatorio/NoInstruct-small-Embedding-v0 language: - en library_name: sentence-transformers license: mit pipeline_tag: sentence-similarity tags: - feature-extraction - mteb - sentence-similarity - sentence-transformers - transformers - llama-cpp - gguf-my-repo model-index: - name: NoInstruct-small-Embedding-v0 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.76119402985074 - type: ap value: 39.03628777559392 - type: f1 value: 69.85860402259618 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.29920000000001 - type: ap value: 90.03479490717608 - type: f1 value: 93.28554395248467 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 49.98799999999999 - type: f1 value: 49.46151232451642 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 31.935000000000002 - type: map_at_10 value: 48.791000000000004 - type: map_at_100 value: 49.619 - type: map_at_1000 value: 49.623 - type: map_at_3 value: 44.334 - type: map_at_5 value: 46.908 - type: mrr_at_1 value: 32.93 - type: mrr_at_10 value: 49.158 - type: mrr_at_100 value: 50.00599999999999 - type: mrr_at_1000 value: 50.01 - type: mrr_at_3 value: 44.618 - type: mrr_at_5 value: 47.325 - type: ndcg_at_1 value: 31.935000000000002 - type: ndcg_at_10 value: 57.593 - type: ndcg_at_100 value: 60.841 - type: ndcg_at_1000 value: 60.924 - type: ndcg_at_3 value: 48.416 - type: ndcg_at_5 value: 53.05 - type: precision_at_1 value: 31.935000000000002 - type: precision_at_10 value: 8.549 - type: precision_at_100 value: 0.9900000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.081 - type: precision_at_5 value: 14.296000000000001 - type: recall_at_1 value: 31.935000000000002 - type: recall_at_10 value: 85.491 - type: recall_at_100 value: 99.004 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 60.242 - type: recall_at_5 value: 71.479 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.78438534940855 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.12916178519471 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.125361608299855 - type: mrr value: 74.92525172580574 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.64322910336641 - type: cos_sim_spearman value: 87.20138453306345 - type: euclidean_pearson value: 87.08547818178234 - type: 
euclidean_spearman value: 87.17066094143931 - type: manhattan_pearson value: 87.30053110771618 - type: manhattan_spearman value: 86.86824441211934 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 86.3961038961039 - type: f1 value: 86.3669961645295 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.40291404289857 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 35.102356817746816 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: mteb/cqadupstack-android config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 31.013 - type: map_at_10 value: 42.681999999999995 - type: map_at_100 value: 44.24 - type: map_at_1000 value: 44.372 - type: map_at_3 value: 39.181 - type: map_at_5 value: 41.071999999999996 - type: mrr_at_1 value: 38.196999999999996 - type: mrr_at_10 value: 48.604 - type: mrr_at_100 value: 49.315 - type: mrr_at_1000 value: 49.363 - type: mrr_at_3 value: 45.756 - type: mrr_at_5 value: 47.43 - type: ndcg_at_1 value: 38.196999999999996 - type: ndcg_at_10 value: 49.344 - type: ndcg_at_100 value: 54.662 - type: ndcg_at_1000 value: 56.665 - type: ndcg_at_3 value: 44.146 - type: ndcg_at_5 value: 46.514 - type: precision_at_1 value: 38.196999999999996 - type: precision_at_10 value: 9.571 - type: precision_at_100 value: 1.542 - type: precision_at_1000 value: 0.202 - type: precision_at_3 value: 21.364 - type: precision_at_5 value: 15.336 - type: recall_at_1 value: 31.013 - type: recall_at_10 value: 61.934999999999995 - type: recall_at_100 value: 83.923 - type: recall_at_1000 value: 96.601 - type: recall_at_3 value: 46.86 - type: recall_at_5 value: 53.620000000000005 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: mteb/cqadupstack-english config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 29.84 - type: map_at_10 value: 39.335 - type: map_at_100 value: 40.647 - type: map_at_1000 value: 40.778 - type: map_at_3 value: 36.556 - type: map_at_5 value: 38.048 - type: mrr_at_1 value: 36.815 - type: mrr_at_10 value: 45.175 - type: mrr_at_100 value: 45.907 - type: mrr_at_1000 value: 45.946999999999996 - type: mrr_at_3 value: 42.909000000000006 - type: mrr_at_5 value: 44.227 - type: ndcg_at_1 value: 36.815 - type: ndcg_at_10 value: 44.783 - type: ndcg_at_100 value: 49.551 - type: ndcg_at_1000 value: 51.612 - type: ndcg_at_3 value: 40.697 - type: ndcg_at_5 value: 42.558 - type: precision_at_1 value: 36.815 - type: precision_at_10 value: 8.363 - type: precision_at_100 value: 1.385 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 19.342000000000002 - type: precision_at_5 value: 13.706999999999999 - type: recall_at_1 value: 29.84 - type: recall_at_10 value: 54.164 - type: recall_at_100 value: 74.36 - type: recall_at_1000 value: 87.484 - type: recall_at_3 value: 42.306 - type: recall_at_5 value: 47.371 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: mteb/cqadupstack-gaming config: default split: test revision: 
4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 39.231 - type: map_at_10 value: 51.44800000000001 - type: map_at_100 value: 52.574 - type: map_at_1000 value: 52.629999999999995 - type: map_at_3 value: 48.077 - type: map_at_5 value: 50.019000000000005 - type: mrr_at_1 value: 44.89 - type: mrr_at_10 value: 54.803000000000004 - type: mrr_at_100 value: 55.556000000000004 - type: mrr_at_1000 value: 55.584 - type: mrr_at_3 value: 52.32 - type: mrr_at_5 value: 53.846000000000004 - type: ndcg_at_1 value: 44.89 - type: ndcg_at_10 value: 57.228 - type: ndcg_at_100 value: 61.57 - type: ndcg_at_1000 value: 62.613 - type: ndcg_at_3 value: 51.727000000000004 - type: ndcg_at_5 value: 54.496 - type: precision_at_1 value: 44.89 - type: precision_at_10 value: 9.266 - type: precision_at_100 value: 1.2309999999999999 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 23.051 - type: precision_at_5 value: 15.987000000000002 - type: recall_at_1 value: 39.231 - type: recall_at_10 value: 70.82000000000001 - type: recall_at_100 value: 89.446 - type: recall_at_1000 value: 96.665 - type: recall_at_3 value: 56.40500000000001 - type: recall_at_5 value: 62.993 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: mteb/cqadupstack-gis config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 25.296000000000003 - type: map_at_10 value: 34.021 - type: map_at_100 value: 35.158 - type: map_at_1000 value: 35.233 - type: map_at_3 value: 31.424999999999997 - type: map_at_5 value: 33.046 - type: mrr_at_1 value: 27.232 - type: mrr_at_10 value: 36.103 - type: mrr_at_100 value: 37.076 - type: mrr_at_1000 value: 37.135 - type: mrr_at_3 value: 33.635 - type: mrr_at_5 value: 35.211 - type: ndcg_at_1 value: 27.232 - type: ndcg_at_10 value: 38.878 - type: ndcg_at_100 value: 44.284 - type: ndcg_at_1000 value: 46.268 - type: ndcg_at_3 value: 33.94 - type: ndcg_at_5 value: 36.687 - type: precision_at_1 value: 27.232 - type: precision_at_10 value: 5.921 - type: precision_at_100 value: 0.907 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 14.426 - type: precision_at_5 value: 10.215 - type: recall_at_1 value: 25.296000000000003 - type: recall_at_10 value: 51.708 - type: recall_at_100 value: 76.36699999999999 - type: recall_at_1000 value: 91.306 - type: recall_at_3 value: 38.651 - type: recall_at_5 value: 45.201 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: mteb/cqadupstack-mathematica config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 16.24 - type: map_at_10 value: 24.696 - type: map_at_100 value: 25.945 - type: map_at_1000 value: 26.069 - type: map_at_3 value: 22.542 - type: map_at_5 value: 23.526 - type: mrr_at_1 value: 20.149 - type: mrr_at_10 value: 29.584 - type: mrr_at_100 value: 30.548 - type: mrr_at_1000 value: 30.618000000000002 - type: mrr_at_3 value: 27.301 - type: mrr_at_5 value: 28.563 - type: ndcg_at_1 value: 20.149 - type: ndcg_at_10 value: 30.029 - type: ndcg_at_100 value: 35.812 - type: ndcg_at_1000 value: 38.755 - type: ndcg_at_3 value: 26.008 - type: ndcg_at_5 value: 27.517000000000003 - type: precision_at_1 value: 20.149 - type: precision_at_10 value: 5.647 - type: precision_at_100 value: 0.968 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 12.934999999999999 - type: precision_at_5 value: 8.955 - type: recall_at_1 value: 16.24 - type: recall_at_10 value: 
41.464 - type: recall_at_100 value: 66.781 - type: recall_at_1000 value: 87.85300000000001 - type: recall_at_3 value: 29.822 - type: recall_at_5 value: 34.096 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: mteb/cqadupstack-physics config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 29.044999999999998 - type: map_at_10 value: 39.568999999999996 - type: map_at_100 value: 40.831 - type: map_at_1000 value: 40.948 - type: map_at_3 value: 36.495 - type: map_at_5 value: 38.21 - type: mrr_at_1 value: 35.611 - type: mrr_at_10 value: 45.175 - type: mrr_at_100 value: 45.974 - type: mrr_at_1000 value: 46.025 - type: mrr_at_3 value: 42.765 - type: mrr_at_5 value: 44.151 - type: ndcg_at_1 value: 35.611 - type: ndcg_at_10 value: 45.556999999999995 - type: ndcg_at_100 value: 50.86000000000001 - type: ndcg_at_1000 value: 52.983000000000004 - type: ndcg_at_3 value: 40.881 - type: ndcg_at_5 value: 43.035000000000004 - type: precision_at_1 value: 35.611 - type: precision_at_10 value: 8.306 - type: precision_at_100 value: 1.276 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 19.57 - type: precision_at_5 value: 13.725000000000001 - type: recall_at_1 value: 29.044999999999998 - type: recall_at_10 value: 57.513999999999996 - type: recall_at_100 value: 80.152 - type: recall_at_1000 value: 93.982 - type: recall_at_3 value: 44.121 - type: recall_at_5 value: 50.007000000000005 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: mteb/cqadupstack-programmers config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 22.349 - type: map_at_10 value: 33.434000000000005 - type: map_at_100 value: 34.8 - type: map_at_1000 value: 34.919 - type: map_at_3 value: 30.348000000000003 - type: map_at_5 value: 31.917 - type: mrr_at_1 value: 28.195999999999998 - type: mrr_at_10 value: 38.557 - type: mrr_at_100 value: 39.550999999999995 - type: mrr_at_1000 value: 39.607 - type: mrr_at_3 value: 36.035000000000004 - type: mrr_at_5 value: 37.364999999999995 - type: ndcg_at_1 value: 28.195999999999998 - type: ndcg_at_10 value: 39.656000000000006 - type: ndcg_at_100 value: 45.507999999999996 - type: ndcg_at_1000 value: 47.848 - type: ndcg_at_3 value: 34.609 - type: ndcg_at_5 value: 36.65 - type: precision_at_1 value: 28.195999999999998 - type: precision_at_10 value: 7.534000000000001 - type: precision_at_100 value: 1.217 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 17.085 - type: precision_at_5 value: 12.169 - type: recall_at_1 value: 22.349 - type: recall_at_10 value: 53.127 - type: recall_at_100 value: 77.884 - type: recall_at_1000 value: 93.705 - type: recall_at_3 value: 38.611000000000004 - type: recall_at_5 value: 44.182 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 25.215749999999996 - type: map_at_10 value: 34.332750000000004 - type: map_at_100 value: 35.58683333333333 - type: map_at_1000 value: 35.70458333333333 - type: map_at_3 value: 31.55441666666667 - type: map_at_5 value: 33.100833333333334 - type: mrr_at_1 value: 29.697250000000004 - type: mrr_at_10 value: 38.372249999999994 - type: mrr_at_100 value: 39.26708333333334 - type: mrr_at_1000 value: 39.3265 - type: mrr_at_3 value: 35.946083333333334 - type: mrr_at_5 value: 37.336999999999996 - type: ndcg_at_1 value: 
29.697250000000004 - type: ndcg_at_10 value: 39.64575 - type: ndcg_at_100 value: 44.996833333333335 - type: ndcg_at_1000 value: 47.314499999999995 - type: ndcg_at_3 value: 34.93383333333334 - type: ndcg_at_5 value: 37.15291666666667 - type: precision_at_1 value: 29.697250000000004 - type: precision_at_10 value: 6.98825 - type: precision_at_100 value: 1.138 - type: precision_at_1000 value: 0.15283333333333332 - type: precision_at_3 value: 16.115583333333333 - type: precision_at_5 value: 11.460916666666666 - type: recall_at_1 value: 25.215749999999996 - type: recall_at_10 value: 51.261250000000004 - type: recall_at_100 value: 74.67258333333334 - type: recall_at_1000 value: 90.72033333333334 - type: recall_at_3 value: 38.1795 - type: recall_at_5 value: 43.90658333333334 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: mteb/cqadupstack-stats config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 24.352 - type: map_at_10 value: 30.576999999999998 - type: map_at_100 value: 31.545 - type: map_at_1000 value: 31.642 - type: map_at_3 value: 28.605000000000004 - type: map_at_5 value: 29.828 - type: mrr_at_1 value: 26.994 - type: mrr_at_10 value: 33.151 - type: mrr_at_100 value: 33.973 - type: mrr_at_1000 value: 34.044999999999995 - type: mrr_at_3 value: 31.135 - type: mrr_at_5 value: 32.262 - type: ndcg_at_1 value: 26.994 - type: ndcg_at_10 value: 34.307 - type: ndcg_at_100 value: 39.079 - type: ndcg_at_1000 value: 41.548 - type: ndcg_at_3 value: 30.581000000000003 - type: ndcg_at_5 value: 32.541 - type: precision_at_1 value: 26.994 - type: precision_at_10 value: 5.244999999999999 - type: precision_at_100 value: 0.831 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 12.781 - type: precision_at_5 value: 9.017999999999999 - type: recall_at_1 value: 24.352 - type: recall_at_10 value: 43.126999999999995 - type: recall_at_100 value: 64.845 - type: recall_at_1000 value: 83.244 - type: recall_at_3 value: 33.308 - type: recall_at_5 value: 37.984 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: mteb/cqadupstack-tex config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 16.592000000000002 - type: map_at_10 value: 23.29 - type: map_at_100 value: 24.423000000000002 - type: map_at_1000 value: 24.554000000000002 - type: map_at_3 value: 20.958 - type: map_at_5 value: 22.267 - type: mrr_at_1 value: 20.061999999999998 - type: mrr_at_10 value: 26.973999999999997 - type: mrr_at_100 value: 27.944999999999997 - type: mrr_at_1000 value: 28.023999999999997 - type: mrr_at_3 value: 24.839 - type: mrr_at_5 value: 26.033 - type: ndcg_at_1 value: 20.061999999999998 - type: ndcg_at_10 value: 27.682000000000002 - type: ndcg_at_100 value: 33.196 - type: ndcg_at_1000 value: 36.246 - type: ndcg_at_3 value: 23.559 - type: ndcg_at_5 value: 25.507 - type: precision_at_1 value: 20.061999999999998 - type: precision_at_10 value: 5.086 - type: precision_at_100 value: 0.9249999999999999 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 11.046 - type: precision_at_5 value: 8.149000000000001 - type: recall_at_1 value: 16.592000000000002 - type: recall_at_10 value: 37.181999999999995 - type: recall_at_100 value: 62.224999999999994 - type: recall_at_1000 value: 84.072 - type: recall_at_3 value: 25.776 - type: recall_at_5 value: 30.680000000000003 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: 
mteb/cqadupstack-unix config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 26.035999999999998 - type: map_at_10 value: 34.447 - type: map_at_100 value: 35.697 - type: map_at_1000 value: 35.802 - type: map_at_3 value: 31.64 - type: map_at_5 value: 33.056999999999995 - type: mrr_at_1 value: 29.851 - type: mrr_at_10 value: 38.143 - type: mrr_at_100 value: 39.113 - type: mrr_at_1000 value: 39.175 - type: mrr_at_3 value: 35.665 - type: mrr_at_5 value: 36.901 - type: ndcg_at_1 value: 29.851 - type: ndcg_at_10 value: 39.554 - type: ndcg_at_100 value: 45.091 - type: ndcg_at_1000 value: 47.504000000000005 - type: ndcg_at_3 value: 34.414 - type: ndcg_at_5 value: 36.508 - type: precision_at_1 value: 29.851 - type: precision_at_10 value: 6.614000000000001 - type: precision_at_100 value: 1.051 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 15.329999999999998 - type: precision_at_5 value: 10.671999999999999 - type: recall_at_1 value: 26.035999999999998 - type: recall_at_10 value: 51.396 - type: recall_at_100 value: 75.09 - type: recall_at_1000 value: 91.904 - type: recall_at_3 value: 37.378 - type: recall_at_5 value: 42.69 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: mteb/cqadupstack-webmasters config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 23.211000000000002 - type: map_at_10 value: 32.231 - type: map_at_100 value: 33.772999999999996 - type: map_at_1000 value: 33.982 - type: map_at_3 value: 29.128 - type: map_at_5 value: 31.002999999999997 - type: mrr_at_1 value: 27.668 - type: mrr_at_10 value: 36.388 - type: mrr_at_100 value: 37.384 - type: mrr_at_1000 value: 37.44 - type: mrr_at_3 value: 33.762 - type: mrr_at_5 value: 35.234 - type: ndcg_at_1 value: 27.668 - type: ndcg_at_10 value: 38.043 - type: ndcg_at_100 value: 44.21 - type: ndcg_at_1000 value: 46.748 - type: ndcg_at_3 value: 32.981 - type: ndcg_at_5 value: 35.58 - type: precision_at_1 value: 27.668 - type: precision_at_10 value: 7.352 - type: precision_at_100 value: 1.5 - type: precision_at_1000 value: 0.23700000000000002 - type: precision_at_3 value: 15.613 - type: precision_at_5 value: 11.501999999999999 - type: recall_at_1 value: 23.211000000000002 - type: recall_at_10 value: 49.851 - type: recall_at_100 value: 77.596 - type: recall_at_1000 value: 93.683 - type: recall_at_3 value: 35.403 - type: recall_at_5 value: 42.485 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: mteb/cqadupstack-wordpress config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 19.384 - type: map_at_10 value: 26.262999999999998 - type: map_at_100 value: 27.409 - type: map_at_1000 value: 27.526 - type: map_at_3 value: 23.698 - type: map_at_5 value: 25.217 - type: mrr_at_1 value: 20.702 - type: mrr_at_10 value: 27.810000000000002 - type: mrr_at_100 value: 28.863 - type: mrr_at_1000 value: 28.955 - type: mrr_at_3 value: 25.230999999999998 - type: mrr_at_5 value: 26.821 - type: ndcg_at_1 value: 20.702 - type: ndcg_at_10 value: 30.688 - type: ndcg_at_100 value: 36.138999999999996 - type: ndcg_at_1000 value: 38.984 - type: ndcg_at_3 value: 25.663000000000004 - type: ndcg_at_5 value: 28.242 - type: precision_at_1 value: 20.702 - type: precision_at_10 value: 4.954 - type: precision_at_100 value: 0.823 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 10.844 - type: 
precision_at_5 value: 8.096 - type: recall_at_1 value: 19.384 - type: recall_at_10 value: 42.847 - type: recall_at_100 value: 67.402 - type: recall_at_1000 value: 88.145 - type: recall_at_3 value: 29.513 - type: recall_at_5 value: 35.57 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 14.915000000000001 - type: map_at_10 value: 25.846999999999998 - type: map_at_100 value: 27.741 - type: map_at_1000 value: 27.921000000000003 - type: map_at_3 value: 21.718 - type: map_at_5 value: 23.948 - type: mrr_at_1 value: 33.941 - type: mrr_at_10 value: 46.897 - type: mrr_at_100 value: 47.63 - type: mrr_at_1000 value: 47.658 - type: mrr_at_3 value: 43.919999999999995 - type: mrr_at_5 value: 45.783 - type: ndcg_at_1 value: 33.941 - type: ndcg_at_10 value: 35.202 - type: ndcg_at_100 value: 42.132 - type: ndcg_at_1000 value: 45.190999999999995 - type: ndcg_at_3 value: 29.68 - type: ndcg_at_5 value: 31.631999999999998 - type: precision_at_1 value: 33.941 - type: precision_at_10 value: 10.906 - type: precision_at_100 value: 1.8339999999999999 - type: precision_at_1000 value: 0.241 - type: precision_at_3 value: 22.606 - type: precision_at_5 value: 17.081 - type: recall_at_1 value: 14.915000000000001 - type: recall_at_10 value: 40.737 - type: recall_at_100 value: 64.42 - type: recall_at_1000 value: 81.435 - type: recall_at_3 value: 26.767000000000003 - type: recall_at_5 value: 32.895 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 8.665000000000001 - type: map_at_10 value: 19.087 - type: map_at_100 value: 26.555 - type: map_at_1000 value: 28.105999999999998 - type: map_at_3 value: 13.858999999999998 - type: map_at_5 value: 16.083 - type: mrr_at_1 value: 68.5 - type: mrr_at_10 value: 76.725 - type: mrr_at_100 value: 76.974 - type: mrr_at_1000 value: 76.981 - type: mrr_at_3 value: 75.583 - type: mrr_at_5 value: 76.208 - type: ndcg_at_1 value: 55.875 - type: ndcg_at_10 value: 41.018 - type: ndcg_at_100 value: 44.982 - type: ndcg_at_1000 value: 52.43 - type: ndcg_at_3 value: 46.534 - type: ndcg_at_5 value: 43.083 - type: precision_at_1 value: 68.5 - type: precision_at_10 value: 32.35 - type: precision_at_100 value: 10.078 - type: precision_at_1000 value: 1.957 - type: precision_at_3 value: 50.083 - type: precision_at_5 value: 41.3 - type: recall_at_1 value: 8.665000000000001 - type: recall_at_10 value: 24.596999999999998 - type: recall_at_100 value: 50.612 - type: recall_at_1000 value: 74.24 - type: recall_at_3 value: 15.337 - type: recall_at_5 value: 18.796 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 55.06500000000001 - type: f1 value: 49.827367590822035 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 76.059 - type: map_at_10 value: 83.625 - type: map_at_100 value: 83.845 - type: map_at_1000 value: 83.858 - type: map_at_3 value: 82.67099999999999 - type: map_at_5 value: 83.223 - type: mrr_at_1 value: 82.013 - type: mrr_at_10 value: 88.44800000000001 - type: mrr_at_100 value: 88.535 - type: mrr_at_1000 value: 88.537 - type: mrr_at_3 value: 87.854 - type: 
mrr_at_5 value: 88.221 - type: ndcg_at_1 value: 82.013 - type: ndcg_at_10 value: 87.128 - type: ndcg_at_100 value: 87.922 - type: ndcg_at_1000 value: 88.166 - type: ndcg_at_3 value: 85.648 - type: ndcg_at_5 value: 86.366 - type: precision_at_1 value: 82.013 - type: precision_at_10 value: 10.32 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 32.408 - type: precision_at_5 value: 19.973 - type: recall_at_1 value: 76.059 - type: recall_at_10 value: 93.229 - type: recall_at_100 value: 96.387 - type: recall_at_1000 value: 97.916 - type: recall_at_3 value: 89.025 - type: recall_at_5 value: 90.96300000000001 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 20.479 - type: map_at_10 value: 33.109 - type: map_at_100 value: 34.803 - type: map_at_1000 value: 35.003 - type: map_at_3 value: 28.967 - type: map_at_5 value: 31.385 - type: mrr_at_1 value: 40.278000000000006 - type: mrr_at_10 value: 48.929 - type: mrr_at_100 value: 49.655 - type: mrr_at_1000 value: 49.691 - type: mrr_at_3 value: 46.605000000000004 - type: mrr_at_5 value: 48.056 - type: ndcg_at_1 value: 40.278000000000006 - type: ndcg_at_10 value: 40.649 - type: ndcg_at_100 value: 47.027 - type: ndcg_at_1000 value: 50.249 - type: ndcg_at_3 value: 37.364000000000004 - type: ndcg_at_5 value: 38.494 - type: precision_at_1 value: 40.278000000000006 - type: precision_at_10 value: 11.327 - type: precision_at_100 value: 1.802 - type: precision_at_1000 value: 0.23700000000000002 - type: precision_at_3 value: 25.102999999999998 - type: precision_at_5 value: 18.457 - type: recall_at_1 value: 20.479 - type: recall_at_10 value: 46.594 - type: recall_at_100 value: 71.101 - type: recall_at_1000 value: 90.31099999999999 - type: recall_at_3 value: 33.378 - type: recall_at_5 value: 39.587 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 36.59 - type: map_at_10 value: 58.178 - type: map_at_100 value: 59.095 - type: map_at_1000 value: 59.16400000000001 - type: map_at_3 value: 54.907 - type: map_at_5 value: 56.89999999999999 - type: mrr_at_1 value: 73.18 - type: mrr_at_10 value: 79.935 - type: mrr_at_100 value: 80.16799999999999 - type: mrr_at_1000 value: 80.17800000000001 - type: mrr_at_3 value: 78.776 - type: mrr_at_5 value: 79.522 - type: ndcg_at_1 value: 73.18 - type: ndcg_at_10 value: 66.538 - type: ndcg_at_100 value: 69.78 - type: ndcg_at_1000 value: 71.102 - type: ndcg_at_3 value: 61.739 - type: ndcg_at_5 value: 64.35600000000001 - type: precision_at_1 value: 73.18 - type: precision_at_10 value: 14.035 - type: precision_at_100 value: 1.657 - type: precision_at_1000 value: 0.183 - type: precision_at_3 value: 39.684999999999995 - type: precision_at_5 value: 25.885 - type: recall_at_1 value: 36.59 - type: recall_at_10 value: 70.176 - type: recall_at_100 value: 82.836 - type: recall_at_1000 value: 91.526 - type: recall_at_3 value: 59.526999999999994 - type: recall_at_5 value: 64.713 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.1472 - type: ap value: 85.73994227076815 - type: f1 value: 90.1271700788608 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco 
config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 21.689 - type: map_at_10 value: 33.518 - type: map_at_100 value: 34.715 - type: map_at_1000 value: 34.766000000000005 - type: map_at_3 value: 29.781000000000002 - type: map_at_5 value: 31.838 - type: mrr_at_1 value: 22.249 - type: mrr_at_10 value: 34.085 - type: mrr_at_100 value: 35.223 - type: mrr_at_1000 value: 35.266999999999996 - type: mrr_at_3 value: 30.398999999999997 - type: mrr_at_5 value: 32.437 - type: ndcg_at_1 value: 22.249 - type: ndcg_at_10 value: 40.227000000000004 - type: ndcg_at_100 value: 45.961999999999996 - type: ndcg_at_1000 value: 47.248000000000005 - type: ndcg_at_3 value: 32.566 - type: ndcg_at_5 value: 36.229 - type: precision_at_1 value: 22.249 - type: precision_at_10 value: 6.358 - type: precision_at_100 value: 0.923 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 13.83 - type: precision_at_5 value: 10.145999999999999 - type: recall_at_1 value: 21.689 - type: recall_at_10 value: 60.92999999999999 - type: recall_at_100 value: 87.40599999999999 - type: recall_at_1000 value: 97.283 - type: recall_at_3 value: 40.01 - type: recall_at_5 value: 48.776 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 95.28727770177838 - type: f1 value: 95.02577308660041 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.5736434108527 - type: f1 value: 61.2451202054398 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.01210490921318 - type: f1 value: 73.70188053982473 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.33422999327504 - type: f1 value: 79.48369022509658 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.70891567267726 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.15203494451706 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.919517862194173 - type: mrr value: 33.15466289140483 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 5.992 - type: map_at_10 value: 13.197000000000001 - type: map_at_100 value: 16.907 - type: map_at_1000 value: 18.44 - type: map_at_3 value: 9.631 - type: map_at_5 value: 11.243 - type: mrr_at_1 value: 44.272 - type: mrr_at_10 value: 53.321 - type: mrr_at_100 value: 53.903 - type: mrr_at_1000 value: 53.952999999999996 - type: mrr_at_3 value: 
51.393 - type: mrr_at_5 value: 52.708999999999996 - type: ndcg_at_1 value: 42.415000000000006 - type: ndcg_at_10 value: 34.921 - type: ndcg_at_100 value: 32.384 - type: ndcg_at_1000 value: 41.260000000000005 - type: ndcg_at_3 value: 40.186 - type: ndcg_at_5 value: 37.89 - type: precision_at_1 value: 44.272 - type: precision_at_10 value: 26.006 - type: precision_at_100 value: 8.44 - type: precision_at_1000 value: 2.136 - type: precision_at_3 value: 37.977 - type: precision_at_5 value: 32.755 - type: recall_at_1 value: 5.992 - type: recall_at_10 value: 17.01 - type: recall_at_100 value: 33.080999999999996 - type: recall_at_1000 value: 65.054 - type: recall_at_3 value: 10.528 - type: recall_at_5 value: 13.233 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 28.871999999999996 - type: map_at_10 value: 43.286 - type: map_at_100 value: 44.432 - type: map_at_1000 value: 44.464999999999996 - type: map_at_3 value: 38.856 - type: map_at_5 value: 41.514 - type: mrr_at_1 value: 32.619 - type: mrr_at_10 value: 45.75 - type: mrr_at_100 value: 46.622 - type: mrr_at_1000 value: 46.646 - type: mrr_at_3 value: 41.985 - type: mrr_at_5 value: 44.277 - type: ndcg_at_1 value: 32.59 - type: ndcg_at_10 value: 50.895999999999994 - type: ndcg_at_100 value: 55.711999999999996 - type: ndcg_at_1000 value: 56.48800000000001 - type: ndcg_at_3 value: 42.504999999999995 - type: ndcg_at_5 value: 46.969 - type: precision_at_1 value: 32.59 - type: precision_at_10 value: 8.543000000000001 - type: precision_at_100 value: 1.123 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 19.448 - type: precision_at_5 value: 14.218 - type: recall_at_1 value: 28.871999999999996 - type: recall_at_10 value: 71.748 - type: recall_at_100 value: 92.55499999999999 - type: recall_at_1000 value: 98.327 - type: recall_at_3 value: 49.944 - type: recall_at_5 value: 60.291 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: map_at_1 value: 70.664 - type: map_at_10 value: 84.681 - type: map_at_100 value: 85.289 - type: map_at_1000 value: 85.306 - type: map_at_3 value: 81.719 - type: map_at_5 value: 83.601 - type: mrr_at_1 value: 81.35 - type: mrr_at_10 value: 87.591 - type: mrr_at_100 value: 87.691 - type: mrr_at_1000 value: 87.693 - type: mrr_at_3 value: 86.675 - type: mrr_at_5 value: 87.29299999999999 - type: ndcg_at_1 value: 81.33 - type: ndcg_at_10 value: 88.411 - type: ndcg_at_100 value: 89.579 - type: ndcg_at_1000 value: 89.687 - type: ndcg_at_3 value: 85.613 - type: ndcg_at_5 value: 87.17 - type: precision_at_1 value: 81.33 - type: precision_at_10 value: 13.422 - type: precision_at_100 value: 1.5270000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.463 - type: precision_at_5 value: 24.646 - type: recall_at_1 value: 70.664 - type: recall_at_10 value: 95.54 - type: recall_at_100 value: 99.496 - type: recall_at_1000 value: 99.978 - type: recall_at_3 value: 87.481 - type: recall_at_5 value: 91.88499999999999 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 55.40341814991112 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 
385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: v_measure value: 61.231318481346655 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: map_at_1 value: 4.833 - type: map_at_10 value: 13.149 - type: map_at_100 value: 15.578 - type: map_at_1000 value: 15.963 - type: map_at_3 value: 9.269 - type: map_at_5 value: 11.182 - type: mrr_at_1 value: 23.9 - type: mrr_at_10 value: 35.978 - type: mrr_at_100 value: 37.076 - type: mrr_at_1000 value: 37.126 - type: mrr_at_3 value: 32.333 - type: mrr_at_5 value: 34.413 - type: ndcg_at_1 value: 23.9 - type: ndcg_at_10 value: 21.823 - type: ndcg_at_100 value: 30.833 - type: ndcg_at_1000 value: 36.991 - type: ndcg_at_3 value: 20.465 - type: ndcg_at_5 value: 17.965999999999998 - type: precision_at_1 value: 23.9 - type: precision_at_10 value: 11.49 - type: precision_at_100 value: 2.444 - type: precision_at_1000 value: 0.392 - type: precision_at_3 value: 19.3 - type: precision_at_5 value: 15.959999999999999 - type: recall_at_1 value: 4.833 - type: recall_at_10 value: 23.294999999999998 - type: recall_at_100 value: 49.63 - type: recall_at_1000 value: 79.49199999999999 - type: recall_at_3 value: 11.732 - type: recall_at_5 value: 16.167 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cos_sim_pearson value: 85.62938108735759 - type: cos_sim_spearman value: 80.30777094408789 - type: euclidean_pearson value: 82.94516686659536 - type: euclidean_spearman value: 80.34489663248169 - type: manhattan_pearson value: 82.85830094736245 - type: manhattan_spearman value: 80.24902623215449 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 85.23777464247604 - type: cos_sim_spearman value: 75.75714864112797 - type: euclidean_pearson value: 82.33806918604493 - type: euclidean_spearman value: 75.45282124387357 - type: manhattan_pearson value: 82.32555620660538 - type: manhattan_spearman value: 75.49228731684082 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.88151620954451 - type: cos_sim_spearman value: 86.08377598473446 - type: euclidean_pearson value: 85.36958329369413 - type: euclidean_spearman value: 86.10274219670679 - type: manhattan_pearson value: 85.25873897594711 - type: manhattan_spearman value: 85.98096461661584 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 84.29360558735978 - type: cos_sim_spearman value: 82.28284203795577 - type: euclidean_pearson value: 83.81636655536633 - type: euclidean_spearman value: 82.24340438530236 - type: manhattan_pearson value: 83.83914453428608 - type: manhattan_spearman value: 82.28391354080694 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.47344180426744 - type: cos_sim_spearman value: 88.90045649789438 - type: euclidean_pearson value: 88.43020815961273 - type: euclidean_spearman value: 89.0087449011776 - type: manhattan_pearson value: 88.37601826505525 - type: 
manhattan_spearman value: 88.96756360690617 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.35997025304613 - type: cos_sim_spearman value: 85.18237675717147 - type: euclidean_pearson value: 84.46478196990202 - type: euclidean_spearman value: 85.27748677712205 - type: manhattan_pearson value: 84.29342543953123 - type: manhattan_spearman value: 85.10579612516567 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.56668329596836 - type: cos_sim_spearman value: 88.72837234129177 - type: euclidean_pearson value: 89.39395650897828 - type: euclidean_spearman value: 88.82001247906778 - type: manhattan_pearson value: 89.41735354368878 - type: manhattan_spearman value: 88.95159141850039 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 67.466167902991 - type: cos_sim_spearman value: 68.54466147197274 - type: euclidean_pearson value: 69.35551179564695 - type: euclidean_spearman value: 68.75455717749132 - type: manhattan_pearson value: 69.42432368208264 - type: manhattan_spearman value: 68.83203709670562 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.33241300373689 - type: cos_sim_spearman value: 86.97909372129874 - type: euclidean_pearson value: 86.99526113559924 - type: euclidean_spearman value: 87.02644372623219 - type: manhattan_pearson value: 86.78744182759846 - type: manhattan_spearman value: 86.8886180198196 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 86.18374413668717 - type: mrr value: 95.93213068703264 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 58.31699999999999 - type: map_at_10 value: 67.691 - type: map_at_100 value: 68.201 - type: map_at_1000 value: 68.232 - type: map_at_3 value: 64.47800000000001 - type: map_at_5 value: 66.51 - type: mrr_at_1 value: 61.0 - type: mrr_at_10 value: 68.621 - type: mrr_at_100 value: 68.973 - type: mrr_at_1000 value: 69.002 - type: mrr_at_3 value: 66.111 - type: mrr_at_5 value: 67.578 - type: ndcg_at_1 value: 61.0 - type: ndcg_at_10 value: 72.219 - type: ndcg_at_100 value: 74.397 - type: ndcg_at_1000 value: 75.021 - type: ndcg_at_3 value: 66.747 - type: ndcg_at_5 value: 69.609 - type: precision_at_1 value: 61.0 - type: precision_at_10 value: 9.6 - type: precision_at_100 value: 1.08 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.267 - type: recall_at_1 value: 58.31699999999999 - type: recall_at_10 value: 85.233 - type: recall_at_100 value: 95.167 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.589 - type: recall_at_5 value: 77.628 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: 
d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.83267326732673 - type: cos_sim_ap value: 96.13707107038228 - type: cos_sim_f1 value: 91.48830263812842 - type: cos_sim_precision value: 91.0802775024777 - type: cos_sim_recall value: 91.9 - type: dot_accuracy value: 99.83069306930693 - type: dot_ap value: 96.21199069147254 - type: dot_f1 value: 91.36295556665004 - type: dot_precision value: 91.22632103688933 - type: dot_recall value: 91.5 - type: euclidean_accuracy value: 99.83267326732673 - type: euclidean_ap value: 96.08957801367436 - type: euclidean_f1 value: 91.33004926108374 - type: euclidean_precision value: 90.0 - type: euclidean_recall value: 92.7 - type: manhattan_accuracy value: 99.83564356435643 - type: manhattan_ap value: 96.10534946461945 - type: manhattan_f1 value: 91.74950298210736 - type: manhattan_precision value: 91.20553359683794 - type: manhattan_recall value: 92.30000000000001 - type: max_accuracy value: 99.83564356435643 - type: max_ap value: 96.21199069147254 - type: max_f1 value: 91.74950298210736 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 62.045718843534736 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.6501777041092 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.963913408053955 - type: mrr value: 53.87972423818012 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.44195730764998 - type: cos_sim_spearman value: 30.59626288679397 - type: dot_pearson value: 30.22974492404086 - type: dot_spearman value: 29.345245972906497 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: map_at_1 value: 0.24 - type: map_at_10 value: 2.01 - type: map_at_100 value: 11.928999999999998 - type: map_at_1000 value: 29.034 - type: map_at_3 value: 0.679 - type: map_at_5 value: 1.064 - type: mrr_at_1 value: 92.0 - type: mrr_at_10 value: 96.0 - type: mrr_at_100 value: 96.0 - type: mrr_at_1000 value: 96.0 - type: mrr_at_3 value: 96.0 - type: mrr_at_5 value: 96.0 - type: ndcg_at_1 value: 87.0 - type: ndcg_at_10 value: 80.118 - type: ndcg_at_100 value: 60.753 - type: ndcg_at_1000 value: 54.632999999999996 - type: ndcg_at_3 value: 83.073 - type: ndcg_at_5 value: 80.733 - type: precision_at_1 value: 92.0 - type: precision_at_10 value: 84.8 - type: precision_at_100 value: 62.019999999999996 - type: precision_at_1000 value: 24.028 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 85.2 - type: recall_at_1 value: 0.24 - type: recall_at_10 value: 2.205 - type: recall_at_100 value: 15.068000000000001 - type: recall_at_1000 value: 51.796 - type: recall_at_3 value: 0.698 - type: recall_at_5 value: 1.1199999999999999 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: 
a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 3.066 - type: map_at_10 value: 9.219 - type: map_at_100 value: 15.387 - type: map_at_1000 value: 16.957 - type: map_at_3 value: 5.146 - type: map_at_5 value: 6.6739999999999995 - type: mrr_at_1 value: 40.816 - type: mrr_at_10 value: 50.844 - type: mrr_at_100 value: 51.664 - type: mrr_at_1000 value: 51.664 - type: mrr_at_3 value: 46.259 - type: mrr_at_5 value: 49.116 - type: ndcg_at_1 value: 37.755 - type: ndcg_at_10 value: 23.477 - type: ndcg_at_100 value: 36.268 - type: ndcg_at_1000 value: 47.946 - type: ndcg_at_3 value: 25.832 - type: ndcg_at_5 value: 24.235 - type: precision_at_1 value: 40.816 - type: precision_at_10 value: 20.204 - type: precision_at_100 value: 7.611999999999999 - type: precision_at_1000 value: 1.543 - type: precision_at_3 value: 25.169999999999998 - type: precision_at_5 value: 23.265 - type: recall_at_1 value: 3.066 - type: recall_at_10 value: 14.985999999999999 - type: recall_at_100 value: 47.902 - type: recall_at_1000 value: 83.56400000000001 - type: recall_at_3 value: 5.755 - type: recall_at_5 value: 8.741999999999999 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 69.437 - type: ap value: 12.844066827082706 - type: f1 value: 52.74974809872495 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.26768534238823 - type: f1 value: 61.65100187399282 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 49.860968711078804 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.7423854085951 - type: cos_sim_ap value: 73.47560303339571 - type: cos_sim_f1 value: 67.372778183589 - type: cos_sim_precision value: 62.54520795660036 - type: cos_sim_recall value: 73.00791556728232 - type: dot_accuracy value: 85.36091077069798 - type: dot_ap value: 72.42521572307255 - type: dot_f1 value: 66.90576304724215 - type: dot_precision value: 62.96554934823091 - type: dot_recall value: 71.37203166226914 - type: euclidean_accuracy value: 85.76026703224653 - type: euclidean_ap value: 73.44852563860128 - type: euclidean_f1 value: 67.3 - type: euclidean_precision value: 63.94299287410926 - type: euclidean_recall value: 71.02902374670185 - type: manhattan_accuracy value: 85.7423854085951 - type: manhattan_ap value: 73.2635034755551 - type: manhattan_f1 value: 67.3180263800684 - type: manhattan_precision value: 62.66484765802638 - type: manhattan_recall value: 72.71767810026385 - type: max_accuracy value: 85.76026703224653 - type: max_ap value: 73.47560303339571 - type: max_f1 value: 67.372778183589 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.67543757519307 - type: cos_sim_ap value: 85.35516518531304 - type: 
cos_sim_f1 value: 77.58197635511934 - type: cos_sim_precision value: 75.01078360891445 - type: cos_sim_recall value: 80.33569448721897 - type: dot_accuracy value: 87.61400240617844 - type: dot_ap value: 83.0774968268665 - type: dot_f1 value: 75.68229012162561 - type: dot_precision value: 72.99713876967095 - type: dot_recall value: 78.57252848783493 - type: euclidean_accuracy value: 88.73753250281368 - type: euclidean_ap value: 85.48043564821317 - type: euclidean_f1 value: 77.75975862719216 - type: euclidean_precision value: 76.21054187920456 - type: euclidean_recall value: 79.37326763166 - type: manhattan_accuracy value: 88.75111576823068 - type: manhattan_ap value: 85.44993439423668 - type: manhattan_f1 value: 77.6861329994845 - type: manhattan_precision value: 74.44601270289344 - type: manhattan_recall value: 81.22112719433323 - type: max_accuracy value: 88.75111576823068 - type: max_ap value: 85.48043564821317 - type: max_f1 value: 77.75975862719216
---

# twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF

This model was converted to GGUF format from [`avsolatorio/NoInstruct-small-Embedding-v0`](https://huggingface.co/avsolatorio/NoInstruct-small-Embedding-v0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/avsolatorio/NoInstruct-small-Embedding-v0) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF --hf-file noinstruct-small-embedding-v0-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF --hf-file noinstruct-small-embedding-v0-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF --hf-file noinstruct-small-embedding-v0-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF --hf-file noinstruct-small-embedding-v0-q8_0.gguf -c 2048
```
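Because this checkpoint is an embedding model rather than a chat model, the more useful pattern is to ask a running `llama-server` for embeddings instead of text completions. The sketch below is a hypothetical example, not part of the original card: it assumes the server command from the Server section above was additionally started with an embeddings-enabling flag (commonly `--embeddings`; check `llama-server --help` for your build), that it listens on the default port 8080, and that it exposes an OpenAI-style `/v1/embeddings` route.

```python
# Minimal sketch: fetch an embedding from a running llama-server instance.
# Assumptions (not taken from this card): the server was started with an
# embeddings flag, listens on localhost:8080, and serves /v1/embeddings.
import requests

resp = requests.post(
    "http://localhost:8080/v1/embeddings",
    json={"input": ["The meaning to life and the universe is"]},
    timeout=30,
)
resp.raise_for_status()
vector = resp.json()["data"][0]["embedding"]
print(f"dimensions: {len(vector)}, first values: {vector[:5]}")
```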
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
# twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF

This model was converted to GGUF format from [`avsolatorio/NoInstruct-small-Embedding-v0`](https://huggingface.co/avsolatorio/NoInstruct-small-Embedding-v0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/avsolatorio/NoInstruct-small-Embedding-v0) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF --hf-file noinstruct-small-embedding-v0-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF --hf-file noinstruct-small-embedding-v0-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF --hf-file noinstruct-small-embedding-v0-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo twine-network/NoInstruct-small-Embedding-v0-Q8_0-GGUF --hf-file noinstruct-small-embedding-v0-q8_0.gguf -c 2048
```
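For readers who want to sanity-check the quantized GGUF output against the upstream float checkpoint, the record's metadata tags this as a sentence-transformers model, so the original weights can be loaded directly from the Hub. The snippet below is an illustrative sketch rather than part of the original card; it assumes the `sentence-transformers` package is installed and that `avsolatorio/NoInstruct-small-Embedding-v0` will be downloaded on first use, and the example sentences are arbitrary.

```python
# Illustrative sketch: embed two sentences with the upstream (non-GGUF) model
# and compare them with cosine similarity, as a reference for the GGUF output.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("avsolatorio/NoInstruct-small-Embedding-v0")
sentences = [
    "GGUF is a file format used by llama.cpp.",
    "llama.cpp loads models stored in the GGUF format.",
]
embeddings = model.encode(sentences, normalize_embeddings=True)
print(util.cos_sim(embeddings[0], embeddings[1]).item())  # cosine similarity
```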
{"base_model": "avsolatorio/NoInstruct-small-Embedding-v0", "language": ["en"], "library_name": "sentence-transformers", "license": "mit", "pipeline_tag": "sentence-similarity", "tags": ["feature-extraction", "mteb", "sentence-similarity", "sentence-transformers", "transformers", "llama-cpp", "gguf-my-repo"], "model-index": [{"name": "NoInstruct-small-Embedding-v0", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 75.76119402985074}, {"type": "ap", "value": 39.03628777559392}, {"type": "f1", "value": 69.85860402259618}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 93.29920000000001}, {"type": "ap", "value": 90.03479490717608}, {"type": "f1", "value": 93.28554395248467}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 49.98799999999999}, {"type": "f1", "value": 49.46151232451642}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "mteb/arguana", "config": "default", "split": "test", "revision": "c22ab2a51041ffd869aaddef7af8d8215647e41a"}, "metrics": [{"type": "map_at_1", "value": 31.935000000000002}, {"type": "map_at_10", "value": 48.791000000000004}, {"type": "map_at_100", "value": 49.619}, {"type": "map_at_1000", "value": 49.623}, {"type": "map_at_3", "value": 44.334}, {"type": "map_at_5", "value": 46.908}, {"type": "mrr_at_1", "value": 32.93}, {"type": "mrr_at_10", "value": 49.158}, {"type": "mrr_at_100", "value": 50.00599999999999}, {"type": "mrr_at_1000", "value": 50.01}, {"type": "mrr_at_3", "value": 44.618}, {"type": "mrr_at_5", "value": 47.325}, {"type": "ndcg_at_1", "value": 31.935000000000002}, {"type": "ndcg_at_10", "value": 57.593}, {"type": "ndcg_at_100", "value": 60.841}, {"type": "ndcg_at_1000", "value": 60.924}, {"type": "ndcg_at_3", "value": 48.416}, {"type": "ndcg_at_5", "value": 53.05}, {"type": "precision_at_1", "value": 31.935000000000002}, {"type": "precision_at_10", "value": 8.549}, {"type": "precision_at_100", "value": 0.9900000000000001}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 20.081}, {"type": "precision_at_5", "value": 14.296000000000001}, {"type": "recall_at_1", "value": 31.935000000000002}, {"type": "recall_at_10", "value": 85.491}, {"type": "recall_at_100", "value": 99.004}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 60.242}, {"type": "recall_at_5", "value": 71.479}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 47.78438534940855}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 
40.12916178519471}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 62.125361608299855}, {"type": "mrr", "value": 74.92525172580574}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.64322910336641}, {"type": "cos_sim_spearman", "value": 87.20138453306345}, {"type": "euclidean_pearson", "value": 87.08547818178234}, {"type": "euclidean_spearman", "value": 87.17066094143931}, {"type": "manhattan_pearson", "value": 87.30053110771618}, {"type": "manhattan_spearman", "value": 86.86824441211934}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 86.3961038961039}, {"type": "f1", "value": 86.3669961645295}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 39.40291404289857}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 35.102356817746816}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "mteb/cqadupstack-android", "config": "default", "split": "test", "revision": "f46a197baaae43b4f621051089b82a364682dfeb"}, "metrics": [{"type": "map_at_1", "value": 31.013}, {"type": "map_at_10", "value": 42.681999999999995}, {"type": "map_at_100", "value": 44.24}, {"type": "map_at_1000", "value": 44.372}, {"type": "map_at_3", "value": 39.181}, {"type": "map_at_5", "value": 41.071999999999996}, {"type": "mrr_at_1", "value": 38.196999999999996}, {"type": "mrr_at_10", "value": 48.604}, {"type": "mrr_at_100", "value": 49.315}, {"type": "mrr_at_1000", "value": 49.363}, {"type": "mrr_at_3", "value": 45.756}, {"type": "mrr_at_5", "value": 47.43}, {"type": "ndcg_at_1", "value": 38.196999999999996}, {"type": "ndcg_at_10", "value": 49.344}, {"type": "ndcg_at_100", "value": 54.662}, {"type": "ndcg_at_1000", "value": 56.665}, {"type": "ndcg_at_3", "value": 44.146}, {"type": "ndcg_at_5", "value": 46.514}, {"type": "precision_at_1", "value": 38.196999999999996}, {"type": "precision_at_10", "value": 9.571}, {"type": "precision_at_100", "value": 1.542}, {"type": "precision_at_1000", "value": 0.202}, {"type": "precision_at_3", "value": 21.364}, {"type": "precision_at_5", "value": 15.336}, {"type": "recall_at_1", "value": 31.013}, {"type": "recall_at_10", "value": 61.934999999999995}, {"type": "recall_at_100", "value": 83.923}, {"type": "recall_at_1000", "value": 96.601}, {"type": "recall_at_3", "value": 46.86}, {"type": "recall_at_5", "value": 53.620000000000005}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackEnglishRetrieval", "type": "mteb/cqadupstack-english", "config": "default", "split": "test", "revision": 
"ad9991cb51e31e31e430383c75ffb2885547b5f0"}, "metrics": [{"type": "map_at_1", "value": 29.84}, {"type": "map_at_10", "value": 39.335}, {"type": "map_at_100", "value": 40.647}, {"type": "map_at_1000", "value": 40.778}, {"type": "map_at_3", "value": 36.556}, {"type": "map_at_5", "value": 38.048}, {"type": "mrr_at_1", "value": 36.815}, {"type": "mrr_at_10", "value": 45.175}, {"type": "mrr_at_100", "value": 45.907}, {"type": "mrr_at_1000", "value": 45.946999999999996}, {"type": "mrr_at_3", "value": 42.909000000000006}, {"type": "mrr_at_5", "value": 44.227}, {"type": "ndcg_at_1", "value": 36.815}, {"type": "ndcg_at_10", "value": 44.783}, {"type": "ndcg_at_100", "value": 49.551}, {"type": "ndcg_at_1000", "value": 51.612}, {"type": "ndcg_at_3", "value": 40.697}, {"type": "ndcg_at_5", "value": 42.558}, {"type": "precision_at_1", "value": 36.815}, {"type": "precision_at_10", "value": 8.363}, {"type": "precision_at_100", "value": 1.385}, {"type": "precision_at_1000", "value": 0.186}, {"type": "precision_at_3", "value": 19.342000000000002}, {"type": "precision_at_5", "value": 13.706999999999999}, {"type": "recall_at_1", "value": 29.84}, {"type": "recall_at_10", "value": 54.164}, {"type": "recall_at_100", "value": 74.36}, {"type": "recall_at_1000", "value": 87.484}, {"type": "recall_at_3", "value": 42.306}, {"type": "recall_at_5", "value": 47.371}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGamingRetrieval", "type": "mteb/cqadupstack-gaming", "config": "default", "split": "test", "revision": "4885aa143210c98657558c04aaf3dc47cfb54340"}, "metrics": [{"type": "map_at_1", "value": 39.231}, {"type": "map_at_10", "value": 51.44800000000001}, {"type": "map_at_100", "value": 52.574}, {"type": "map_at_1000", "value": 52.629999999999995}, {"type": "map_at_3", "value": 48.077}, {"type": "map_at_5", "value": 50.019000000000005}, {"type": "mrr_at_1", "value": 44.89}, {"type": "mrr_at_10", "value": 54.803000000000004}, {"type": "mrr_at_100", "value": 55.556000000000004}, {"type": "mrr_at_1000", "value": 55.584}, {"type": "mrr_at_3", "value": 52.32}, {"type": "mrr_at_5", "value": 53.846000000000004}, {"type": "ndcg_at_1", "value": 44.89}, {"type": "ndcg_at_10", "value": 57.228}, {"type": "ndcg_at_100", "value": 61.57}, {"type": "ndcg_at_1000", "value": 62.613}, {"type": "ndcg_at_3", "value": 51.727000000000004}, {"type": "ndcg_at_5", "value": 54.496}, {"type": "precision_at_1", "value": 44.89}, {"type": "precision_at_10", "value": 9.266}, {"type": "precision_at_100", "value": 1.2309999999999999}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_3", "value": 23.051}, {"type": "precision_at_5", "value": 15.987000000000002}, {"type": "recall_at_1", "value": 39.231}, {"type": "recall_at_10", "value": 70.82000000000001}, {"type": "recall_at_100", "value": 89.446}, {"type": "recall_at_1000", "value": 96.665}, {"type": "recall_at_3", "value": 56.40500000000001}, {"type": "recall_at_5", "value": 62.993}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGisRetrieval", "type": "mteb/cqadupstack-gis", "config": "default", "split": "test", "revision": "5003b3064772da1887988e05400cf3806fe491f2"}, "metrics": [{"type": "map_at_1", "value": 25.296000000000003}, {"type": "map_at_10", "value": 34.021}, {"type": "map_at_100", "value": 35.158}, {"type": "map_at_1000", "value": 35.233}, {"type": "map_at_3", "value": 31.424999999999997}, {"type": "map_at_5", "value": 33.046}, {"type": "mrr_at_1", "value": 27.232}, {"type": "mrr_at_10", "value": 36.103}, {"type": 
"mrr_at_100", "value": 37.076}, {"type": "mrr_at_1000", "value": 37.135}, {"type": "mrr_at_3", "value": 33.635}, {"type": "mrr_at_5", "value": 35.211}, {"type": "ndcg_at_1", "value": 27.232}, {"type": "ndcg_at_10", "value": 38.878}, {"type": "ndcg_at_100", "value": 44.284}, {"type": "ndcg_at_1000", "value": 46.268}, {"type": "ndcg_at_3", "value": 33.94}, {"type": "ndcg_at_5", "value": 36.687}, {"type": "precision_at_1", "value": 27.232}, {"type": "precision_at_10", "value": 5.921}, {"type": "precision_at_100", "value": 0.907}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 14.426}, {"type": "precision_at_5", "value": 10.215}, {"type": "recall_at_1", "value": 25.296000000000003}, {"type": "recall_at_10", "value": 51.708}, {"type": "recall_at_100", "value": 76.36699999999999}, {"type": "recall_at_1000", "value": 91.306}, {"type": "recall_at_3", "value": 38.651}, {"type": "recall_at_5", "value": 45.201}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackMathematicaRetrieval", "type": "mteb/cqadupstack-mathematica", "config": "default", "split": "test", "revision": "90fceea13679c63fe563ded68f3b6f06e50061de"}, "metrics": [{"type": "map_at_1", "value": 16.24}, {"type": "map_at_10", "value": 24.696}, {"type": "map_at_100", "value": 25.945}, {"type": "map_at_1000", "value": 26.069}, {"type": "map_at_3", "value": 22.542}, {"type": "map_at_5", "value": 23.526}, {"type": "mrr_at_1", "value": 20.149}, {"type": "mrr_at_10", "value": 29.584}, {"type": "mrr_at_100", "value": 30.548}, {"type": "mrr_at_1000", "value": 30.618000000000002}, {"type": "mrr_at_3", "value": 27.301}, {"type": "mrr_at_5", "value": 28.563}, {"type": "ndcg_at_1", "value": 20.149}, {"type": "ndcg_at_10", "value": 30.029}, {"type": "ndcg_at_100", "value": 35.812}, {"type": "ndcg_at_1000", "value": 38.755}, {"type": "ndcg_at_3", "value": 26.008}, {"type": "ndcg_at_5", "value": 27.517000000000003}, {"type": "precision_at_1", "value": 20.149}, {"type": "precision_at_10", "value": 5.647}, {"type": "precision_at_100", "value": 0.968}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_3", "value": 12.934999999999999}, {"type": "precision_at_5", "value": 8.955}, {"type": "recall_at_1", "value": 16.24}, {"type": "recall_at_10", "value": 41.464}, {"type": "recall_at_100", "value": 66.781}, {"type": "recall_at_1000", "value": 87.85300000000001}, {"type": "recall_at_3", "value": 29.822}, {"type": "recall_at_5", "value": 34.096}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackPhysicsRetrieval", "type": "mteb/cqadupstack-physics", "config": "default", "split": "test", "revision": "79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4"}, "metrics": [{"type": "map_at_1", "value": 29.044999999999998}, {"type": "map_at_10", "value": 39.568999999999996}, {"type": "map_at_100", "value": 40.831}, {"type": "map_at_1000", "value": 40.948}, {"type": "map_at_3", "value": 36.495}, {"type": "map_at_5", "value": 38.21}, {"type": "mrr_at_1", "value": 35.611}, {"type": "mrr_at_10", "value": 45.175}, {"type": "mrr_at_100", "value": 45.974}, {"type": "mrr_at_1000", "value": 46.025}, {"type": "mrr_at_3", "value": 42.765}, {"type": "mrr_at_5", "value": 44.151}, {"type": "ndcg_at_1", "value": 35.611}, {"type": "ndcg_at_10", "value": 45.556999999999995}, {"type": "ndcg_at_100", "value": 50.86000000000001}, {"type": "ndcg_at_1000", "value": 52.983000000000004}, {"type": "ndcg_at_3", "value": 40.881}, {"type": "ndcg_at_5", "value": 43.035000000000004}, {"type": 
"precision_at_1", "value": 35.611}, {"type": "precision_at_10", "value": 8.306}, {"type": "precision_at_100", "value": 1.276}, {"type": "precision_at_1000", "value": 0.165}, {"type": "precision_at_3", "value": 19.57}, {"type": "precision_at_5", "value": 13.725000000000001}, {"type": "recall_at_1", "value": 29.044999999999998}, {"type": "recall_at_10", "value": 57.513999999999996}, {"type": "recall_at_100", "value": 80.152}, {"type": "recall_at_1000", "value": 93.982}, {"type": "recall_at_3", "value": 44.121}, {"type": "recall_at_5", "value": 50.007000000000005}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackProgrammersRetrieval", "type": "mteb/cqadupstack-programmers", "config": "default", "split": "test", "revision": "6184bc1440d2dbc7612be22b50686b8826d22b32"}, "metrics": [{"type": "map_at_1", "value": 22.349}, {"type": "map_at_10", "value": 33.434000000000005}, {"type": "map_at_100", "value": 34.8}, {"type": "map_at_1000", "value": 34.919}, {"type": "map_at_3", "value": 30.348000000000003}, {"type": "map_at_5", "value": 31.917}, {"type": "mrr_at_1", "value": 28.195999999999998}, {"type": "mrr_at_10", "value": 38.557}, {"type": "mrr_at_100", "value": 39.550999999999995}, {"type": "mrr_at_1000", "value": 39.607}, {"type": "mrr_at_3", "value": 36.035000000000004}, {"type": "mrr_at_5", "value": 37.364999999999995}, {"type": "ndcg_at_1", "value": 28.195999999999998}, {"type": "ndcg_at_10", "value": 39.656000000000006}, {"type": "ndcg_at_100", "value": 45.507999999999996}, {"type": "ndcg_at_1000", "value": 47.848}, {"type": "ndcg_at_3", "value": 34.609}, {"type": "ndcg_at_5", "value": 36.65}, {"type": "precision_at_1", "value": 28.195999999999998}, {"type": "precision_at_10", "value": 7.534000000000001}, {"type": "precision_at_100", "value": 1.217}, {"type": "precision_at_1000", "value": 0.158}, {"type": "precision_at_3", "value": 17.085}, {"type": "precision_at_5", "value": 12.169}, {"type": "recall_at_1", "value": 22.349}, {"type": "recall_at_10", "value": 53.127}, {"type": "recall_at_100", "value": 77.884}, {"type": "recall_at_1000", "value": 93.705}, {"type": "recall_at_3", "value": 38.611000000000004}, {"type": "recall_at_5", "value": 44.182}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackRetrieval", "type": "mteb/cqadupstack", "config": "default", "split": "test", "revision": "4ffe81d471b1924886b33c7567bfb200e9eec5c4"}, "metrics": [{"type": "map_at_1", "value": 25.215749999999996}, {"type": "map_at_10", "value": 34.332750000000004}, {"type": "map_at_100", "value": 35.58683333333333}, {"type": "map_at_1000", "value": 35.70458333333333}, {"type": "map_at_3", "value": 31.55441666666667}, {"type": "map_at_5", "value": 33.100833333333334}, {"type": "mrr_at_1", "value": 29.697250000000004}, {"type": "mrr_at_10", "value": 38.372249999999994}, {"type": "mrr_at_100", "value": 39.26708333333334}, {"type": "mrr_at_1000", "value": 39.3265}, {"type": "mrr_at_3", "value": 35.946083333333334}, {"type": "mrr_at_5", "value": 37.336999999999996}, {"type": "ndcg_at_1", "value": 29.697250000000004}, {"type": "ndcg_at_10", "value": 39.64575}, {"type": "ndcg_at_100", "value": 44.996833333333335}, {"type": "ndcg_at_1000", "value": 47.314499999999995}, {"type": "ndcg_at_3", "value": 34.93383333333334}, {"type": "ndcg_at_5", "value": 37.15291666666667}, {"type": "precision_at_1", "value": 29.697250000000004}, {"type": "precision_at_10", "value": 6.98825}, {"type": "precision_at_100", "value": 1.138}, {"type": "precision_at_1000", "value": 0.15283333333333332}, 
{"type": "precision_at_3", "value": 16.115583333333333}, {"type": "precision_at_5", "value": 11.460916666666666}, {"type": "recall_at_1", "value": 25.215749999999996}, {"type": "recall_at_10", "value": 51.261250000000004}, {"type": "recall_at_100", "value": 74.67258333333334}, {"type": "recall_at_1000", "value": 90.72033333333334}, {"type": "recall_at_3", "value": 38.1795}, {"type": "recall_at_5", "value": 43.90658333333334}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackStatsRetrieval", "type": "mteb/cqadupstack-stats", "config": "default", "split": "test", "revision": "65ac3a16b8e91f9cee4c9828cc7c335575432a2a"}, "metrics": [{"type": "map_at_1", "value": 24.352}, {"type": "map_at_10", "value": 30.576999999999998}, {"type": "map_at_100", "value": 31.545}, {"type": "map_at_1000", "value": 31.642}, {"type": "map_at_3", "value": 28.605000000000004}, {"type": "map_at_5", "value": 29.828}, {"type": "mrr_at_1", "value": 26.994}, {"type": "mrr_at_10", "value": 33.151}, {"type": "mrr_at_100", "value": 33.973}, {"type": "mrr_at_1000", "value": 34.044999999999995}, {"type": "mrr_at_3", "value": 31.135}, {"type": "mrr_at_5", "value": 32.262}, {"type": "ndcg_at_1", "value": 26.994}, {"type": "ndcg_at_10", "value": 34.307}, {"type": "ndcg_at_100", "value": 39.079}, {"type": "ndcg_at_1000", "value": 41.548}, {"type": "ndcg_at_3", "value": 30.581000000000003}, {"type": "ndcg_at_5", "value": 32.541}, {"type": "precision_at_1", "value": 26.994}, {"type": "precision_at_10", "value": 5.244999999999999}, {"type": "precision_at_100", "value": 0.831}, {"type": "precision_at_1000", "value": 0.11100000000000002}, {"type": "precision_at_3", "value": 12.781}, {"type": "precision_at_5", "value": 9.017999999999999}, {"type": "recall_at_1", "value": 24.352}, {"type": "recall_at_10", "value": 43.126999999999995}, {"type": "recall_at_100", "value": 64.845}, {"type": "recall_at_1000", "value": 83.244}, {"type": "recall_at_3", "value": 33.308}, {"type": "recall_at_5", "value": 37.984}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackTexRetrieval", "type": "mteb/cqadupstack-tex", "config": "default", "split": "test", "revision": "46989137a86843e03a6195de44b09deda022eec7"}, "metrics": [{"type": "map_at_1", "value": 16.592000000000002}, {"type": "map_at_10", "value": 23.29}, {"type": "map_at_100", "value": 24.423000000000002}, {"type": "map_at_1000", "value": 24.554000000000002}, {"type": "map_at_3", "value": 20.958}, {"type": "map_at_5", "value": 22.267}, {"type": "mrr_at_1", "value": 20.061999999999998}, {"type": "mrr_at_10", "value": 26.973999999999997}, {"type": "mrr_at_100", "value": 27.944999999999997}, {"type": "mrr_at_1000", "value": 28.023999999999997}, {"type": "mrr_at_3", "value": 24.839}, {"type": "mrr_at_5", "value": 26.033}, {"type": "ndcg_at_1", "value": 20.061999999999998}, {"type": "ndcg_at_10", "value": 27.682000000000002}, {"type": "ndcg_at_100", "value": 33.196}, {"type": "ndcg_at_1000", "value": 36.246}, {"type": "ndcg_at_3", "value": 23.559}, {"type": "ndcg_at_5", "value": 25.507}, {"type": "precision_at_1", "value": 20.061999999999998}, {"type": "precision_at_10", "value": 5.086}, {"type": "precision_at_100", "value": 0.9249999999999999}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_3", "value": 11.046}, {"type": "precision_at_5", "value": 8.149000000000001}, {"type": "recall_at_1", "value": 16.592000000000002}, {"type": "recall_at_10", "value": 37.181999999999995}, {"type": "recall_at_100", "value": 62.224999999999994}, {"type": 
"recall_at_1000", "value": 84.072}, {"type": "recall_at_3", "value": 25.776}, {"type": "recall_at_5", "value": 30.680000000000003}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackUnixRetrieval", "type": "mteb/cqadupstack-unix", "config": "default", "split": "test", "revision": "6c6430d3a6d36f8d2a829195bc5dc94d7e063e53"}, "metrics": [{"type": "map_at_1", "value": 26.035999999999998}, {"type": "map_at_10", "value": 34.447}, {"type": "map_at_100", "value": 35.697}, {"type": "map_at_1000", "value": 35.802}, {"type": "map_at_3", "value": 31.64}, {"type": "map_at_5", "value": 33.056999999999995}, {"type": "mrr_at_1", "value": 29.851}, {"type": "mrr_at_10", "value": 38.143}, {"type": "mrr_at_100", "value": 39.113}, {"type": "mrr_at_1000", "value": 39.175}, {"type": "mrr_at_3", "value": 35.665}, {"type": "mrr_at_5", "value": 36.901}, {"type": "ndcg_at_1", "value": 29.851}, {"type": "ndcg_at_10", "value": 39.554}, {"type": "ndcg_at_100", "value": 45.091}, {"type": "ndcg_at_1000", "value": 47.504000000000005}, {"type": "ndcg_at_3", "value": 34.414}, {"type": "ndcg_at_5", "value": 36.508}, {"type": "precision_at_1", "value": 29.851}, {"type": "precision_at_10", "value": 6.614000000000001}, {"type": "precision_at_100", "value": 1.051}, {"type": "precision_at_1000", "value": 0.13699999999999998}, {"type": "precision_at_3", "value": 15.329999999999998}, {"type": "precision_at_5", "value": 10.671999999999999}, {"type": "recall_at_1", "value": 26.035999999999998}, {"type": "recall_at_10", "value": 51.396}, {"type": "recall_at_100", "value": 75.09}, {"type": "recall_at_1000", "value": 91.904}, {"type": "recall_at_3", "value": 37.378}, {"type": "recall_at_5", "value": 42.69}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWebmastersRetrieval", "type": "mteb/cqadupstack-webmasters", "config": "default", "split": "test", "revision": "160c094312a0e1facb97e55eeddb698c0abe3571"}, "metrics": [{"type": "map_at_1", "value": 23.211000000000002}, {"type": "map_at_10", "value": 32.231}, {"type": "map_at_100", "value": 33.772999999999996}, {"type": "map_at_1000", "value": 33.982}, {"type": "map_at_3", "value": 29.128}, {"type": "map_at_5", "value": 31.002999999999997}, {"type": "mrr_at_1", "value": 27.668}, {"type": "mrr_at_10", "value": 36.388}, {"type": "mrr_at_100", "value": 37.384}, {"type": "mrr_at_1000", "value": 37.44}, {"type": "mrr_at_3", "value": 33.762}, {"type": "mrr_at_5", "value": 35.234}, {"type": "ndcg_at_1", "value": 27.668}, {"type": "ndcg_at_10", "value": 38.043}, {"type": "ndcg_at_100", "value": 44.21}, {"type": "ndcg_at_1000", "value": 46.748}, {"type": "ndcg_at_3", "value": 32.981}, {"type": "ndcg_at_5", "value": 35.58}, {"type": "precision_at_1", "value": 27.668}, {"type": "precision_at_10", "value": 7.352}, {"type": "precision_at_100", "value": 1.5}, {"type": "precision_at_1000", "value": 0.23700000000000002}, {"type": "precision_at_3", "value": 15.613}, {"type": "precision_at_5", "value": 11.501999999999999}, {"type": "recall_at_1", "value": 23.211000000000002}, {"type": "recall_at_10", "value": 49.851}, {"type": "recall_at_100", "value": 77.596}, {"type": "recall_at_1000", "value": 93.683}, {"type": "recall_at_3", "value": 35.403}, {"type": "recall_at_5", "value": 42.485}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWordpressRetrieval", "type": "mteb/cqadupstack-wordpress", "config": "default", "split": "test", "revision": "4ffe81d471b1924886b33c7567bfb200e9eec5c4"}, "metrics": [{"type": "map_at_1", "value": 19.384}, 
{"type": "map_at_10", "value": 26.262999999999998}, {"type": "map_at_100", "value": 27.409}, {"type": "map_at_1000", "value": 27.526}, {"type": "map_at_3", "value": 23.698}, {"type": "map_at_5", "value": 25.217}, {"type": "mrr_at_1", "value": 20.702}, {"type": "mrr_at_10", "value": 27.810000000000002}, {"type": "mrr_at_100", "value": 28.863}, {"type": "mrr_at_1000", "value": 28.955}, {"type": "mrr_at_3", "value": 25.230999999999998}, {"type": "mrr_at_5", "value": 26.821}, {"type": "ndcg_at_1", "value": 20.702}, {"type": "ndcg_at_10", "value": 30.688}, {"type": "ndcg_at_100", "value": 36.138999999999996}, {"type": "ndcg_at_1000", "value": 38.984}, {"type": "ndcg_at_3", "value": 25.663000000000004}, {"type": "ndcg_at_5", "value": 28.242}, {"type": "precision_at_1", "value": 20.702}, {"type": "precision_at_10", "value": 4.954}, {"type": "precision_at_100", "value": 0.823}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 10.844}, {"type": "precision_at_5", "value": 8.096}, {"type": "recall_at_1", "value": 19.384}, {"type": "recall_at_10", "value": 42.847}, {"type": "recall_at_100", "value": 67.402}, {"type": "recall_at_1000", "value": 88.145}, {"type": "recall_at_3", "value": 29.513}, {"type": "recall_at_5", "value": 35.57}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "mteb/climate-fever", "config": "default", "split": "test", "revision": "47f2ac6acb640fc46020b02a5b59fdda04d39380"}, "metrics": [{"type": "map_at_1", "value": 14.915000000000001}, {"type": "map_at_10", "value": 25.846999999999998}, {"type": "map_at_100", "value": 27.741}, {"type": "map_at_1000", "value": 27.921000000000003}, {"type": "map_at_3", "value": 21.718}, {"type": "map_at_5", "value": 23.948}, {"type": "mrr_at_1", "value": 33.941}, {"type": "mrr_at_10", "value": 46.897}, {"type": "mrr_at_100", "value": 47.63}, {"type": "mrr_at_1000", "value": 47.658}, {"type": "mrr_at_3", "value": 43.919999999999995}, {"type": "mrr_at_5", "value": 45.783}, {"type": "ndcg_at_1", "value": 33.941}, {"type": "ndcg_at_10", "value": 35.202}, {"type": "ndcg_at_100", "value": 42.132}, {"type": "ndcg_at_1000", "value": 45.190999999999995}, {"type": "ndcg_at_3", "value": 29.68}, {"type": "ndcg_at_5", "value": 31.631999999999998}, {"type": "precision_at_1", "value": 33.941}, {"type": "precision_at_10", "value": 10.906}, {"type": "precision_at_100", "value": 1.8339999999999999}, {"type": "precision_at_1000", "value": 0.241}, {"type": "precision_at_3", "value": 22.606}, {"type": "precision_at_5", "value": 17.081}, {"type": "recall_at_1", "value": 14.915000000000001}, {"type": "recall_at_10", "value": 40.737}, {"type": "recall_at_100", "value": 64.42}, {"type": "recall_at_1000", "value": 81.435}, {"type": "recall_at_3", "value": 26.767000000000003}, {"type": "recall_at_5", "value": 32.895}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "mteb/dbpedia", "config": "default", "split": "test", "revision": "c0f706b76e590d620bd6618b3ca8efdd34e2d659"}, "metrics": [{"type": "map_at_1", "value": 8.665000000000001}, {"type": "map_at_10", "value": 19.087}, {"type": "map_at_100", "value": 26.555}, {"type": "map_at_1000", "value": 28.105999999999998}, {"type": "map_at_3", "value": 13.858999999999998}, {"type": "map_at_5", "value": 16.083}, {"type": "mrr_at_1", "value": 68.5}, {"type": "mrr_at_10", "value": 76.725}, {"type": "mrr_at_100", "value": 76.974}, {"type": "mrr_at_1000", "value": 76.981}, {"type": "mrr_at_3", "value": 75.583}, {"type": 
"mrr_at_5", "value": 76.208}, {"type": "ndcg_at_1", "value": 55.875}, {"type": "ndcg_at_10", "value": 41.018}, {"type": "ndcg_at_100", "value": 44.982}, {"type": "ndcg_at_1000", "value": 52.43}, {"type": "ndcg_at_3", "value": 46.534}, {"type": "ndcg_at_5", "value": 43.083}, {"type": "precision_at_1", "value": 68.5}, {"type": "precision_at_10", "value": 32.35}, {"type": "precision_at_100", "value": 10.078}, {"type": "precision_at_1000", "value": 1.957}, {"type": "precision_at_3", "value": 50.083}, {"type": "precision_at_5", "value": 41.3}, {"type": "recall_at_1", "value": 8.665000000000001}, {"type": "recall_at_10", "value": 24.596999999999998}, {"type": "recall_at_100", "value": 50.612}, {"type": "recall_at_1000", "value": 74.24}, {"type": "recall_at_3", "value": 15.337}, {"type": "recall_at_5", "value": 18.796}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 55.06500000000001}, {"type": "f1", "value": 49.827367590822035}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "mteb/fever", "config": "default", "split": "test", "revision": "bea83ef9e8fb933d90a2f1d5515737465d613e12"}, "metrics": [{"type": "map_at_1", "value": 76.059}, {"type": "map_at_10", "value": 83.625}, {"type": "map_at_100", "value": 83.845}, {"type": "map_at_1000", "value": 83.858}, {"type": "map_at_3", "value": 82.67099999999999}, {"type": "map_at_5", "value": 83.223}, {"type": "mrr_at_1", "value": 82.013}, {"type": "mrr_at_10", "value": 88.44800000000001}, {"type": "mrr_at_100", "value": 88.535}, {"type": "mrr_at_1000", "value": 88.537}, {"type": "mrr_at_3", "value": 87.854}, {"type": "mrr_at_5", "value": 88.221}, {"type": "ndcg_at_1", "value": 82.013}, {"type": "ndcg_at_10", "value": 87.128}, {"type": "ndcg_at_100", "value": 87.922}, {"type": "ndcg_at_1000", "value": 88.166}, {"type": "ndcg_at_3", "value": 85.648}, {"type": "ndcg_at_5", "value": 86.366}, {"type": "precision_at_1", "value": 82.013}, {"type": "precision_at_10", "value": 10.32}, {"type": "precision_at_100", "value": 1.093}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 32.408}, {"type": "precision_at_5", "value": 19.973}, {"type": "recall_at_1", "value": 76.059}, {"type": "recall_at_10", "value": 93.229}, {"type": "recall_at_100", "value": 96.387}, {"type": "recall_at_1000", "value": 97.916}, {"type": "recall_at_3", "value": 89.025}, {"type": "recall_at_5", "value": 90.96300000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "mteb/fiqa", "config": "default", "split": "test", "revision": "27a168819829fe9bcd655c2df245fb19452e8e06"}, "metrics": [{"type": "map_at_1", "value": 20.479}, {"type": "map_at_10", "value": 33.109}, {"type": "map_at_100", "value": 34.803}, {"type": "map_at_1000", "value": 35.003}, {"type": "map_at_3", "value": 28.967}, {"type": "map_at_5", "value": 31.385}, {"type": "mrr_at_1", "value": 40.278000000000006}, {"type": "mrr_at_10", "value": 48.929}, {"type": "mrr_at_100", "value": 49.655}, {"type": "mrr_at_1000", "value": 49.691}, {"type": "mrr_at_3", "value": 46.605000000000004}, {"type": "mrr_at_5", "value": 48.056}, {"type": "ndcg_at_1", "value": 40.278000000000006}, {"type": "ndcg_at_10", "value": 40.649}, {"type": "ndcg_at_100", "value": 47.027}, {"type": "ndcg_at_1000", "value": 50.249}, {"type": "ndcg_at_3", "value": 
37.364000000000004}, {"type": "ndcg_at_5", "value": 38.494}, {"type": "precision_at_1", "value": 40.278000000000006}, {"type": "precision_at_10", "value": 11.327}, {"type": "precision_at_100", "value": 1.802}, {"type": "precision_at_1000", "value": 0.23700000000000002}, {"type": "precision_at_3", "value": 25.102999999999998}, {"type": "precision_at_5", "value": 18.457}, {"type": "recall_at_1", "value": 20.479}, {"type": "recall_at_10", "value": 46.594}, {"type": "recall_at_100", "value": 71.101}, {"type": "recall_at_1000", "value": 90.31099999999999}, {"type": "recall_at_3", "value": 33.378}, {"type": "recall_at_5", "value": 39.587}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "mteb/hotpotqa", "config": "default", "split": "test", "revision": "ab518f4d6fcca38d87c25209f94beba119d02014"}, "metrics": [{"type": "map_at_1", "value": 36.59}, {"type": "map_at_10", "value": 58.178}, {"type": "map_at_100", "value": 59.095}, {"type": "map_at_1000", "value": 59.16400000000001}, {"type": "map_at_3", "value": 54.907}, {"type": "map_at_5", "value": 56.89999999999999}, {"type": "mrr_at_1", "value": 73.18}, {"type": "mrr_at_10", "value": 79.935}, {"type": "mrr_at_100", "value": 80.16799999999999}, {"type": "mrr_at_1000", "value": 80.17800000000001}, {"type": "mrr_at_3", "value": 78.776}, {"type": "mrr_at_5", "value": 79.522}, {"type": "ndcg_at_1", "value": 73.18}, {"type": "ndcg_at_10", "value": 66.538}, {"type": "ndcg_at_100", "value": 69.78}, {"type": "ndcg_at_1000", "value": 71.102}, {"type": "ndcg_at_3", "value": 61.739}, {"type": "ndcg_at_5", "value": 64.35600000000001}, {"type": "precision_at_1", "value": 73.18}, {"type": "precision_at_10", "value": 14.035}, {"type": "precision_at_100", "value": 1.657}, {"type": "precision_at_1000", "value": 0.183}, {"type": "precision_at_3", "value": 39.684999999999995}, {"type": "precision_at_5", "value": 25.885}, {"type": "recall_at_1", "value": 36.59}, {"type": "recall_at_10", "value": 70.176}, {"type": "recall_at_100", "value": 82.836}, {"type": "recall_at_1000", "value": 91.526}, {"type": "recall_at_3", "value": 59.526999999999994}, {"type": "recall_at_5", "value": 64.713}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 90.1472}, {"type": "ap", "value": 85.73994227076815}, {"type": "f1", "value": 90.1271700788608}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "mteb/msmarco", "config": "default", "split": "dev", "revision": "c5a29a104738b98a9e76336939199e264163d4a0"}, "metrics": [{"type": "map_at_1", "value": 21.689}, {"type": "map_at_10", "value": 33.518}, {"type": "map_at_100", "value": 34.715}, {"type": "map_at_1000", "value": 34.766000000000005}, {"type": "map_at_3", "value": 29.781000000000002}, {"type": "map_at_5", "value": 31.838}, {"type": "mrr_at_1", "value": 22.249}, {"type": "mrr_at_10", "value": 34.085}, {"type": "mrr_at_100", "value": 35.223}, {"type": "mrr_at_1000", "value": 35.266999999999996}, {"type": "mrr_at_3", "value": 30.398999999999997}, {"type": "mrr_at_5", "value": 32.437}, {"type": "ndcg_at_1", "value": 22.249}, {"type": "ndcg_at_10", "value": 40.227000000000004}, {"type": "ndcg_at_100", "value": 45.961999999999996}, {"type": "ndcg_at_1000", "value": 47.248000000000005}, {"type": "ndcg_at_3", "value": 32.566}, {"type": "ndcg_at_5", "value": 36.229}, {"type": "precision_at_1", 
"value": 22.249}, {"type": "precision_at_10", "value": 6.358}, {"type": "precision_at_100", "value": 0.923}, {"type": "precision_at_1000", "value": 0.10300000000000001}, {"type": "precision_at_3", "value": 13.83}, {"type": "precision_at_5", "value": 10.145999999999999}, {"type": "recall_at_1", "value": 21.689}, {"type": "recall_at_10", "value": 60.92999999999999}, {"type": "recall_at_100", "value": 87.40599999999999}, {"type": "recall_at_1000", "value": 97.283}, {"type": "recall_at_3", "value": 40.01}, {"type": "recall_at_5", "value": 48.776}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 95.28727770177838}, {"type": "f1", "value": 95.02577308660041}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 79.5736434108527}, {"type": "f1", "value": 61.2451202054398}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 76.01210490921318}, {"type": "f1", "value": 73.70188053982473}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 79.33422999327504}, {"type": "f1", "value": 79.48369022509658}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 34.70891567267726}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 32.15203494451706}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 31.919517862194173}, {"type": "mrr", "value": 33.15466289140483}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "mteb/nfcorpus", "config": "default", "split": "test", "revision": "ec0fa4fe99da2ff19ca1214b7966684033a58814"}, "metrics": [{"type": "map_at_1", "value": 5.992}, {"type": "map_at_10", "value": 13.197000000000001}, {"type": "map_at_100", "value": 16.907}, {"type": "map_at_1000", "value": 18.44}, {"type": "map_at_3", "value": 9.631}, {"type": "map_at_5", "value": 11.243}, {"type": "mrr_at_1", "value": 44.272}, {"type": "mrr_at_10", "value": 53.321}, {"type": "mrr_at_100", "value": 53.903}, {"type": "mrr_at_1000", "value": 53.952999999999996}, {"type": "mrr_at_3", "value": 51.393}, {"type": "mrr_at_5", "value": 52.708999999999996}, {"type": "ndcg_at_1", "value": 42.415000000000006}, {"type": "ndcg_at_10", "value": 34.921}, {"type": 
"ndcg_at_100", "value": 32.384}, {"type": "ndcg_at_1000", "value": 41.260000000000005}, {"type": "ndcg_at_3", "value": 40.186}, {"type": "ndcg_at_5", "value": 37.89}, {"type": "precision_at_1", "value": 44.272}, {"type": "precision_at_10", "value": 26.006}, {"type": "precision_at_100", "value": 8.44}, {"type": "precision_at_1000", "value": 2.136}, {"type": "precision_at_3", "value": 37.977}, {"type": "precision_at_5", "value": 32.755}, {"type": "recall_at_1", "value": 5.992}, {"type": "recall_at_10", "value": 17.01}, {"type": "recall_at_100", "value": 33.080999999999996}, {"type": "recall_at_1000", "value": 65.054}, {"type": "recall_at_3", "value": 10.528}, {"type": "recall_at_5", "value": 13.233}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "mteb/nq", "config": "default", "split": "test", "revision": "b774495ed302d8c44a3a7ea25c90dbce03968f31"}, "metrics": [{"type": "map_at_1", "value": 28.871999999999996}, {"type": "map_at_10", "value": 43.286}, {"type": "map_at_100", "value": 44.432}, {"type": "map_at_1000", "value": 44.464999999999996}, {"type": "map_at_3", "value": 38.856}, {"type": "map_at_5", "value": 41.514}, {"type": "mrr_at_1", "value": 32.619}, {"type": "mrr_at_10", "value": 45.75}, {"type": "mrr_at_100", "value": 46.622}, {"type": "mrr_at_1000", "value": 46.646}, {"type": "mrr_at_3", "value": 41.985}, {"type": "mrr_at_5", "value": 44.277}, {"type": "ndcg_at_1", "value": 32.59}, {"type": "ndcg_at_10", "value": 50.895999999999994}, {"type": "ndcg_at_100", "value": 55.711999999999996}, {"type": "ndcg_at_1000", "value": 56.48800000000001}, {"type": "ndcg_at_3", "value": 42.504999999999995}, {"type": "ndcg_at_5", "value": 46.969}, {"type": "precision_at_1", "value": 32.59}, {"type": "precision_at_10", "value": 8.543000000000001}, {"type": "precision_at_100", "value": 1.123}, {"type": "precision_at_1000", "value": 0.12}, {"type": "precision_at_3", "value": 19.448}, {"type": "precision_at_5", "value": 14.218}, {"type": "recall_at_1", "value": 28.871999999999996}, {"type": "recall_at_10", "value": 71.748}, {"type": "recall_at_100", "value": 92.55499999999999}, {"type": "recall_at_1000", "value": 98.327}, {"type": "recall_at_3", "value": 49.944}, {"type": "recall_at_5", "value": 60.291}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "mteb/quora", "config": "default", "split": "test", "revision": "e4e08e0b7dbe3c8700f0daef558ff32256715259"}, "metrics": [{"type": "map_at_1", "value": 70.664}, {"type": "map_at_10", "value": 84.681}, {"type": "map_at_100", "value": 85.289}, {"type": "map_at_1000", "value": 85.306}, {"type": "map_at_3", "value": 81.719}, {"type": "map_at_5", "value": 83.601}, {"type": "mrr_at_1", "value": 81.35}, {"type": "mrr_at_10", "value": 87.591}, {"type": "mrr_at_100", "value": 87.691}, {"type": "mrr_at_1000", "value": 87.693}, {"type": "mrr_at_3", "value": 86.675}, {"type": "mrr_at_5", "value": 87.29299999999999}, {"type": "ndcg_at_1", "value": 81.33}, {"type": "ndcg_at_10", "value": 88.411}, {"type": "ndcg_at_100", "value": 89.579}, {"type": "ndcg_at_1000", "value": 89.687}, {"type": "ndcg_at_3", "value": 85.613}, {"type": "ndcg_at_5", "value": 87.17}, {"type": "precision_at_1", "value": 81.33}, {"type": "precision_at_10", "value": 13.422}, {"type": "precision_at_100", "value": 1.5270000000000001}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.463}, {"type": "precision_at_5", "value": 24.646}, {"type": "recall_at_1", "value": 70.664}, {"type": "recall_at_10", 
"value": 95.54}, {"type": "recall_at_100", "value": 99.496}, {"type": "recall_at_1000", "value": 99.978}, {"type": "recall_at_3", "value": 87.481}, {"type": "recall_at_5", "value": 91.88499999999999}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 55.40341814991112}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "385e3cb46b4cfa89021f56c4380204149d0efe33"}, "metrics": [{"type": "v_measure", "value": 61.231318481346655}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "mteb/scidocs", "config": "default", "split": "test", "revision": "f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88"}, "metrics": [{"type": "map_at_1", "value": 4.833}, {"type": "map_at_10", "value": 13.149}, {"type": "map_at_100", "value": 15.578}, {"type": "map_at_1000", "value": 15.963}, {"type": "map_at_3", "value": 9.269}, {"type": "map_at_5", "value": 11.182}, {"type": "mrr_at_1", "value": 23.9}, {"type": "mrr_at_10", "value": 35.978}, {"type": "mrr_at_100", "value": 37.076}, {"type": "mrr_at_1000", "value": 37.126}, {"type": "mrr_at_3", "value": 32.333}, {"type": "mrr_at_5", "value": 34.413}, {"type": "ndcg_at_1", "value": 23.9}, {"type": "ndcg_at_10", "value": 21.823}, {"type": "ndcg_at_100", "value": 30.833}, {"type": "ndcg_at_1000", "value": 36.991}, {"type": "ndcg_at_3", "value": 20.465}, {"type": "ndcg_at_5", "value": 17.965999999999998}, {"type": "precision_at_1", "value": 23.9}, {"type": "precision_at_10", "value": 11.49}, {"type": "precision_at_100", "value": 2.444}, {"type": "precision_at_1000", "value": 0.392}, {"type": "precision_at_3", "value": 19.3}, {"type": "precision_at_5", "value": 15.959999999999999}, {"type": "recall_at_1", "value": 4.833}, {"type": "recall_at_10", "value": 23.294999999999998}, {"type": "recall_at_100", "value": 49.63}, {"type": "recall_at_1000", "value": 79.49199999999999}, {"type": "recall_at_3", "value": 11.732}, {"type": "recall_at_5", "value": 16.167}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "20a6d6f312dd54037fe07a32d58e5e168867909d"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.62938108735759}, {"type": "cos_sim_spearman", "value": 80.30777094408789}, {"type": "euclidean_pearson", "value": 82.94516686659536}, {"type": "euclidean_spearman", "value": 80.34489663248169}, {"type": "manhattan_pearson", "value": 82.85830094736245}, {"type": "manhattan_spearman", "value": 80.24902623215449}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.23777464247604}, {"type": "cos_sim_spearman", "value": 75.75714864112797}, {"type": "euclidean_pearson", "value": 82.33806918604493}, {"type": "euclidean_spearman", "value": 75.45282124387357}, {"type": "manhattan_pearson", "value": 82.32555620660538}, {"type": "manhattan_spearman", "value": 75.49228731684082}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": 
"cos_sim_pearson", "value": 84.88151620954451}, {"type": "cos_sim_spearman", "value": 86.08377598473446}, {"type": "euclidean_pearson", "value": 85.36958329369413}, {"type": "euclidean_spearman", "value": 86.10274219670679}, {"type": "manhattan_pearson", "value": 85.25873897594711}, {"type": "manhattan_spearman", "value": 85.98096461661584}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.29360558735978}, {"type": "cos_sim_spearman", "value": 82.28284203795577}, {"type": "euclidean_pearson", "value": 83.81636655536633}, {"type": "euclidean_spearman", "value": 82.24340438530236}, {"type": "manhattan_pearson", "value": 83.83914453428608}, {"type": "manhattan_spearman", "value": 82.28391354080694}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.47344180426744}, {"type": "cos_sim_spearman", "value": 88.90045649789438}, {"type": "euclidean_pearson", "value": 88.43020815961273}, {"type": "euclidean_spearman", "value": 89.0087449011776}, {"type": "manhattan_pearson", "value": 88.37601826505525}, {"type": "manhattan_spearman", "value": 88.96756360690617}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.35997025304613}, {"type": "cos_sim_spearman", "value": 85.18237675717147}, {"type": "euclidean_pearson", "value": 84.46478196990202}, {"type": "euclidean_spearman", "value": 85.27748677712205}, {"type": "manhattan_pearson", "value": 84.29342543953123}, {"type": "manhattan_spearman", "value": 85.10579612516567}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.56668329596836}, {"type": "cos_sim_spearman", "value": 88.72837234129177}, {"type": "euclidean_pearson", "value": 89.39395650897828}, {"type": "euclidean_spearman", "value": 88.82001247906778}, {"type": "manhattan_pearson", "value": 89.41735354368878}, {"type": "manhattan_spearman", "value": 88.95159141850039}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 67.466167902991}, {"type": "cos_sim_spearman", "value": 68.54466147197274}, {"type": "euclidean_pearson", "value": 69.35551179564695}, {"type": "euclidean_spearman", "value": 68.75455717749132}, {"type": "manhattan_pearson", "value": 69.42432368208264}, {"type": "manhattan_spearman", "value": 68.83203709670562}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.33241300373689}, {"type": "cos_sim_spearman", "value": 86.97909372129874}, {"type": "euclidean_pearson", "value": 86.99526113559924}, {"type": "euclidean_spearman", "value": 
87.02644372623219}, {"type": "manhattan_pearson", "value": 86.78744182759846}, {"type": "manhattan_spearman", "value": 86.8886180198196}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 86.18374413668717}, {"type": "mrr", "value": 95.93213068703264}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "mteb/scifact", "config": "default", "split": "test", "revision": "0228b52cf27578f30900b9e5271d331663a030d7"}, "metrics": [{"type": "map_at_1", "value": 58.31699999999999}, {"type": "map_at_10", "value": 67.691}, {"type": "map_at_100", "value": 68.201}, {"type": "map_at_1000", "value": 68.232}, {"type": "map_at_3", "value": 64.47800000000001}, {"type": "map_at_5", "value": 66.51}, {"type": "mrr_at_1", "value": 61.0}, {"type": "mrr_at_10", "value": 68.621}, {"type": "mrr_at_100", "value": 68.973}, {"type": "mrr_at_1000", "value": 69.002}, {"type": "mrr_at_3", "value": 66.111}, {"type": "mrr_at_5", "value": 67.578}, {"type": "ndcg_at_1", "value": 61.0}, {"type": "ndcg_at_10", "value": 72.219}, {"type": "ndcg_at_100", "value": 74.397}, {"type": "ndcg_at_1000", "value": 75.021}, {"type": "ndcg_at_3", "value": 66.747}, {"type": "ndcg_at_5", "value": 69.609}, {"type": "precision_at_1", "value": 61.0}, {"type": "precision_at_10", "value": 9.6}, {"type": "precision_at_100", "value": 1.08}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 25.667}, {"type": "precision_at_5", "value": 17.267}, {"type": "recall_at_1", "value": 58.31699999999999}, {"type": "recall_at_10", "value": 85.233}, {"type": "recall_at_100", "value": 95.167}, {"type": "recall_at_1000", "value": 99.667}, {"type": "recall_at_3", "value": 70.589}, {"type": "recall_at_5", "value": 77.628}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.83267326732673}, {"type": "cos_sim_ap", "value": 96.13707107038228}, {"type": "cos_sim_f1", "value": 91.48830263812842}, {"type": "cos_sim_precision", "value": 91.0802775024777}, {"type": "cos_sim_recall", "value": 91.9}, {"type": "dot_accuracy", "value": 99.83069306930693}, {"type": "dot_ap", "value": 96.21199069147254}, {"type": "dot_f1", "value": 91.36295556665004}, {"type": "dot_precision", "value": 91.22632103688933}, {"type": "dot_recall", "value": 91.5}, {"type": "euclidean_accuracy", "value": 99.83267326732673}, {"type": "euclidean_ap", "value": 96.08957801367436}, {"type": "euclidean_f1", "value": 91.33004926108374}, {"type": "euclidean_precision", "value": 90.0}, {"type": "euclidean_recall", "value": 92.7}, {"type": "manhattan_accuracy", "value": 99.83564356435643}, {"type": "manhattan_ap", "value": 96.10534946461945}, {"type": "manhattan_f1", "value": 91.74950298210736}, {"type": "manhattan_precision", "value": 91.20553359683794}, {"type": "manhattan_recall", "value": 92.30000000000001}, {"type": "max_accuracy", "value": 99.83564356435643}, {"type": "max_ap", "value": 96.21199069147254}, {"type": "max_f1", "value": 91.74950298210736}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", 
"split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 62.045718843534736}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 36.6501777041092}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 52.963913408053955}, {"type": "mrr", "value": 53.87972423818012}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.44195730764998}, {"type": "cos_sim_spearman", "value": 30.59626288679397}, {"type": "dot_pearson", "value": 30.22974492404086}, {"type": "dot_spearman", "value": 29.345245972906497}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "mteb/trec-covid", "config": "default", "split": "test", "revision": "bb9466bac8153a0349341eb1b22e06409e78ef4e"}, "metrics": [{"type": "map_at_1", "value": 0.24}, {"type": "map_at_10", "value": 2.01}, {"type": "map_at_100", "value": 11.928999999999998}, {"type": "map_at_1000", "value": 29.034}, {"type": "map_at_3", "value": 0.679}, {"type": "map_at_5", "value": 1.064}, {"type": "mrr_at_1", "value": 92.0}, {"type": "mrr_at_10", "value": 96.0}, {"type": "mrr_at_100", "value": 96.0}, {"type": "mrr_at_1000", "value": 96.0}, {"type": "mrr_at_3", "value": 96.0}, {"type": "mrr_at_5", "value": 96.0}, {"type": "ndcg_at_1", "value": 87.0}, {"type": "ndcg_at_10", "value": 80.118}, {"type": "ndcg_at_100", "value": 60.753}, {"type": "ndcg_at_1000", "value": 54.632999999999996}, {"type": "ndcg_at_3", "value": 83.073}, {"type": "ndcg_at_5", "value": 80.733}, {"type": "precision_at_1", "value": 92.0}, {"type": "precision_at_10", "value": 84.8}, {"type": "precision_at_100", "value": 62.019999999999996}, {"type": "precision_at_1000", "value": 24.028}, {"type": "precision_at_3", "value": 87.333}, {"type": "precision_at_5", "value": 85.2}, {"type": "recall_at_1", "value": 0.24}, {"type": "recall_at_10", "value": 2.205}, {"type": "recall_at_100", "value": 15.068000000000001}, {"type": "recall_at_1000", "value": 51.796}, {"type": "recall_at_3", "value": 0.698}, {"type": "recall_at_5", "value": 1.1199999999999999}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "mteb/touche2020", "config": "default", "split": "test", "revision": "a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f"}, "metrics": [{"type": "map_at_1", "value": 3.066}, {"type": "map_at_10", "value": 9.219}, {"type": "map_at_100", "value": 15.387}, {"type": "map_at_1000", "value": 16.957}, {"type": "map_at_3", "value": 5.146}, {"type": "map_at_5", "value": 6.6739999999999995}, {"type": "mrr_at_1", "value": 40.816}, {"type": "mrr_at_10", "value": 50.844}, {"type": "mrr_at_100", "value": 51.664}, {"type": "mrr_at_1000", "value": 51.664}, {"type": "mrr_at_3", "value": 46.259}, {"type": "mrr_at_5", "value": 49.116}, {"type": "ndcg_at_1", "value": 37.755}, {"type": "ndcg_at_10", "value": 23.477}, {"type": "ndcg_at_100", "value": 36.268}, {"type": "ndcg_at_1000", "value": 
47.946}, {"type": "ndcg_at_3", "value": 25.832}, {"type": "ndcg_at_5", "value": 24.235}, {"type": "precision_at_1", "value": 40.816}, {"type": "precision_at_10", "value": 20.204}, {"type": "precision_at_100", "value": 7.611999999999999}, {"type": "precision_at_1000", "value": 1.543}, {"type": "precision_at_3", "value": 25.169999999999998}, {"type": "precision_at_5", "value": 23.265}, {"type": "recall_at_1", "value": 3.066}, {"type": "recall_at_10", "value": 14.985999999999999}, {"type": "recall_at_100", "value": 47.902}, {"type": "recall_at_1000", "value": 83.56400000000001}, {"type": "recall_at_3", "value": 5.755}, {"type": "recall_at_5", "value": 8.741999999999999}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 69.437}, {"type": "ap", "value": 12.844066827082706}, {"type": "f1", "value": 52.74974809872495}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 61.26768534238823}, {"type": "f1", "value": 61.65100187399282}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 49.860968711078804}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 85.7423854085951}, {"type": "cos_sim_ap", "value": 73.47560303339571}, {"type": "cos_sim_f1", "value": 67.372778183589}, {"type": "cos_sim_precision", "value": 62.54520795660036}, {"type": "cos_sim_recall", "value": 73.00791556728232}, {"type": "dot_accuracy", "value": 85.36091077069798}, {"type": "dot_ap", "value": 72.42521572307255}, {"type": "dot_f1", "value": 66.90576304724215}, {"type": "dot_precision", "value": 62.96554934823091}, {"type": "dot_recall", "value": 71.37203166226914}, {"type": "euclidean_accuracy", "value": 85.76026703224653}, {"type": "euclidean_ap", "value": 73.44852563860128}, {"type": "euclidean_f1", "value": 67.3}, {"type": "euclidean_precision", "value": 63.94299287410926}, {"type": "euclidean_recall", "value": 71.02902374670185}, {"type": "manhattan_accuracy", "value": 85.7423854085951}, {"type": "manhattan_ap", "value": 73.2635034755551}, {"type": "manhattan_f1", "value": 67.3180263800684}, {"type": "manhattan_precision", "value": 62.66484765802638}, {"type": "manhattan_recall", "value": 72.71767810026385}, {"type": "max_accuracy", "value": 85.76026703224653}, {"type": "max_ap", "value": 73.47560303339571}, {"type": "max_f1", "value": 67.372778183589}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.67543757519307}, {"type": "cos_sim_ap", "value": 85.35516518531304}, {"type": 
"cos_sim_f1", "value": 77.58197635511934}, {"type": "cos_sim_precision", "value": 75.01078360891445}, {"type": "cos_sim_recall", "value": 80.33569448721897}, {"type": "dot_accuracy", "value": 87.61400240617844}, {"type": "dot_ap", "value": 83.0774968268665}, {"type": "dot_f1", "value": 75.68229012162561}, {"type": "dot_precision", "value": 72.99713876967095}, {"type": "dot_recall", "value": 78.57252848783493}, {"type": "euclidean_accuracy", "value": 88.73753250281368}, {"type": "euclidean_ap", "value": 85.48043564821317}, {"type": "euclidean_f1", "value": 77.75975862719216}, {"type": "euclidean_precision", "value": 76.21054187920456}, {"type": "euclidean_recall", "value": 79.37326763166}, {"type": "manhattan_accuracy", "value": 88.75111576823068}, {"type": "manhattan_ap", "value": 85.44993439423668}, {"type": "manhattan_f1", "value": 77.6861329994845}, {"type": "manhattan_precision", "value": 74.44601270289344}, {"type": "manhattan_recall", "value": 81.22112719433323}, {"type": "max_accuracy", "value": 88.75111576823068}, {"type": "max_ap", "value": 85.48043564821317}, {"type": "max_f1", "value": 77.75975862719216}]}]}]}
dataset
null
550
bartowski/Einstein-v6.1-Llama3-8B-exl2
bartowski
text-generation
[ "axolotl", "generated_from_trainer", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama", "llama3", "text-generation", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:HuggingFaceH4/no_robots", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:WizardLM/WizardLM_evol_instruct_70k", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:finetune:meta-llama/Meta-Llama-3-8B", "license:other", "region:us" ]
2024-04-23T16:27:12Z
2024-04-24T16:56:31+00:00
3
2
---
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
language:
- en
license: other
pipeline_tag: text-generation
tags:
- axolotl
- generated_from_trainer
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- llama
- llama3
quantized_by: bartowski
---

## Exllama v2 Quantizations of Einstein-v6.1-Llama3-8B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b>

Each branch contains a different bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.

Original model: https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B

## Prompt format

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

## Available sizes

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2 Einstein-v6.1-Llama3-8B-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download a specific branch, use the `--revision` parameter.
For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/Einstein-v6.1-Llama3-8B-exl2 --revision 6_5 --local-dir Einstein-v6.1-Llama3-8B-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/Einstein-v6.1-Llama3-8B-exl2 --revision 6_5 --local-dir Einstein-v6.1-Llama3-8B-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
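As a scripted counterpart to the `huggingface-cli` commands above, the same branch can also be fetched with the `huggingface_hub` Python API. The following is a minimal sketch using the card's own example values (the 6_5 branch and matching folder name); treat it as illustrative rather than part of the original instructions:

```python
# Minimal sketch: download a single exl2 branch via the huggingface_hub Python API.
# Repo id, revision, and folder name mirror the 6.5 bpw CLI example above.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="bartowski/Einstein-v6.1-Llama3-8B-exl2",
    revision="6_5",                                # branch name = bits per weight
    local_dir="Einstein-v6.1-Llama3-8B-exl2-6_5",  # destination folder
)
print(f"Downloaded to: {local_path}")
```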
[ "SCIQ" ]
Non_BioNLP
## Exllama v2 Quantizations of Einstein-v6.1-Llama3-8B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.

Original model: https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B

## Prompt format

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

## Available sizes

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2 Einstein-v6.1-Llama3-8B-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:

Linux:

```shell
huggingface-cli download bartowski/Einstein-v6.1-Llama3-8B-exl2 --revision 6_5 --local-dir Einstein-v6.1-Llama3-8B-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
huggingface-cli download bartowski/Einstein-v6.1-Llama3-8B-exl2 --revision 6_5 --local-dir Einstein-v6.1-Llama3-8B-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
{"base_model": "meta-llama/Meta-Llama-3-8B", "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval", "allenai/WildChat", "microsoft/orca-math-word-problems-200k", "openchat/openchat_sharegpt4_dataset", "teknium/GPTeacher-General-Instruct", "m-a-p/CodeFeedback-Filtered-Instruction", "totally-not-an-llm/EverythingLM-data-V3", "HuggingFaceH4/no_robots", "OpenAssistant/oasst_top1_2023-08-25", "WizardLM/WizardLM_evol_instruct_70k"], "language": ["en"], "license": "other", "pipeline_tag": "text-generation", "tags": ["axolotl", "generated_from_trainer", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama", "llama3"], "quantized_by": "bartowski"}
dataset
null
551
BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM
BioMistral
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:2402.10373", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
2024-02-17T18:46:15Z
2024-02-19T15:36:58+00:00
40
3
--- {} --- <p align="center"> <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/> </p> # BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains **Abstract:** Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released. **Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes. # 1. BioMistral models **BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC. 
| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |

# 2. Quantized Models

| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |

# 3. Using BioMistral

You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.

Loading the model and tokenizer:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModel.from_pretrained("BioMistral/BioMistral-7B")
```

# 4. Supervised Fine-tuning Benchmark

| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg.
| |-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------| | **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 | | **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 | | | | | | | | | | | | | | | **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 | | **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** | | **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 | | **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> | | | | | | | | | | | | | | | **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 | | **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 | | **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 | | **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 | | | | | | | | | | | | | | | **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 | Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT. # Citation BibTeX Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373) ```bibtex @misc{labrak2024biomistral, title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains}, author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour}, year={2024}, eprint={2402.10373}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
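Note that the usage snippet in the card above loads the weights with `AutoModel`, which returns the bare model without a language-modelling head; for actually generating text with a causal LM such as BioMistral, `AutoModelForCausalLM` is the usual choice. The sketch below is illustrative only (the prompt and generation settings are assumptions, not values from the card) and, per the advisory notice, intended strictly for research use:

```python
# Minimal sketch: research-only text generation with BioMistral-7B via transformers.
# Prompt and generation settings are illustrative assumptions, not values from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained(
    "BioMistral/BioMistral-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Summarize the role of hemoglobin in oxygen transport."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```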
[ "MEDQA", "PUBMEDQA" ]
BioNLP
<p align="center"> <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/> </p> # BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains **Abstract:** Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released. **Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes. # 1. BioMistral models **BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC. 
| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |

# 2. Quantized Models

| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |

# 3. Using BioMistral

You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.

Loading the model and tokenizer:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModel.from_pretrained("BioMistral/BioMistral-7B")
```

# 4. Supervised Fine-tuning Benchmark

| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg.
| |-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------| | **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 | | **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 | | | | | | | | | | | | | | | **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 | | **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** | | **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 | | **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> | | | | | | | | | | | | | | | **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 | | **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 | | **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 | | **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 | | | | | | | | | | | | | | | **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 | Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT. # Citation BibTeX Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373) ```bibtex @misc{labrak2024biomistral, title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains}, author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour}, year={2024}, eprint={2402.10373}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
{}
dataset
null
552
blockblockblock/Dark-Miqu-70B-bpw4-exl2
blockblockblock
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2403.19522", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
2024-05-11T15:04:04Z
2024-05-11T15:07:46+00:00
14
2
--- license: other --- ![Dark-Miqu.png](Dark-Miqu.png) ***NOTE***: *For a full range of GGUF quants kindly provided by @mradermacher: [Static](https://huggingface.co/mradermacher/Dark-Miqu-70B-GGUF) and [IMatrix](https://huggingface.co/mradermacher/Dark-Miqu-70B-i1-GGUF).* A "dark" creative writing model with 32k context. Based off [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere! This model **excels** at writing Dark/Grimdark fantasy (see examples below). # Model background Created using [Mergekit](https://github.com/arcee-ai/mergekit) and based on @sophosympatheia's template for [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0). This model has a lower perplexity compared to [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) (`'4.08 +/- 0.02'` vs `'4.02 +/- 0.02'`). It also generates longer responses when prompted. The model was created in two stages: - First, three "Midnight-Miqu-esque" models were produced using spherical interpolation (slerp) merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and each of the following models: [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3), [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). These models were selected for their dark, imaginative writing styles. Various slerp-merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and other models were also experimented with, but these three yielded the darkest creative writing results. - In the second stage, the three slerp-merged models were combined into a single model using the '[Model Stock](https://arxiv.org/abs/2403.19522)' method, with [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) serving as the base model. # Prompting format Vicuna format is preferred: ``` USER: {prompt} ASSISTANT: ``` Mistral and Alpaca formats are also supported: ``` [INST] {prompt} [/INST] ``` ``` ### Instruction: {prompt} ### Response: ``` # Licence and usage restrictions [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only. 
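To make the preferred Vicuna format above concrete, here is a minimal prompt-assembly sketch in Python. The helper name and the example instruction are illustrative assumptions, and inference code is omitted because it depends on the backend used to load this exl2 quant:

```python
# Minimal sketch: build a Vicuna-style prompt with no system message, matching the
# setup described for the example stories further down the card.
def build_vicuna_prompt(user_prompt: str) -> str:
    return f"USER: {user_prompt}\nASSISTANT:"

print(build_vicuna_prompt("Write me the opening chapter of a grimdark fantasy story."))
```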
# Mergekit configuration The following YAML configuration was used to produce this model: ```yaml name: midnight-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: sophosympatheia/Midnight-Rose-70B-v2.0.3 base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: euryale-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: Sao10K/Euryale-1.3-L2-70B base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: winter-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: Sao10K/WinterGoddess-1.4x-70B-L2 base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: dark-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: midnight-miqu-70b - model: euryale-miqu-70b - model: winter-miqu-70b base_model: 152334H/miqu-1-70b-sf merge_method: model_stock dtype: float16 ``` ## Key configuration details: - '`merge_method: slerp`' uses spherical linear interpolation for merging models. - '`parameters: t`' controls the interpolation ratios between models. - '`embed_slerp: true`' applies slerp to the embedding layers. - '`merge_method: model_stock`' uses the '[Model Stock](https://arxiv.org/abs/2403.19522)' method. See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings. **NOTE**: Run with `mergekit-mega` rather than `mergekit` as there are 4 documents in this one file. # Example stories The following mix of "dark" stories were generated using the Vicuna prompt format with no system message and temperature=0: ## Dark fantasy stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes ### ASSISTANT Chapter One: The Crimson Dawn The world was a cruel, unforgiving place, and the city of Vhyrd was its beating, blackened heart. Nestled in the crook of the Serpent's Spine mountains, it was a place where the sun struggled to shine, and the shadows held more sway than the law. It was a city of thieves, murderers, and worse, and it was home to the man who would become known as the Butcher of the Broken Docks. His name was Rennick, but that meant little to the people who feared him. He was a tall, broad-shouldered figure, with a face that had been carved by a drunken sculptor with a penchant for sharp angles and harsh lines. His eyes were a cold, flat grey, like the steel of his twin blades, and his hair was the color of dried blood. He moved through the crowded streets with a predator's grace, his cloak billowing behind him like a banner of death. Rennick had a job to do, and he was good at his job. He was a killer, a thief-taker, a man who dealt in the currency of pain and suffering. He had been hired to find a girl, a slip of a thing who had stolen something valuable from the wrong people. He didn't know what it was, and he didn't care. All that mattered was the coin that would jingle in his purse when he brought her back, preferably in several small, bloody pieces. He had tracked her to the Broken Docks, a maze of rotting wood and crumbling stone that clung to the edge of the city like a leech to a corpse. 
It was a place where the desperate went to die, and the desperate went to prey on the desperate. It was a place where the screams of the damned were drowned out by the cries of the gulls and the lap of the oily water against the pilings. Rennick moved through the shadows, his blades at the ready. He could feel the eyes on him, the whispers of his name on the wind. He reveled in it. Fear was a weapon, and he wielded it with the same skill as his swords. He could hear the girl's heartbeat, fast and frantic, like a rabbit's. She was close. He turned a corner and found himself in a narrow alley, the walls pressing in on him like the jaws of a trap. At the far end, he saw her, a small, trembling figure huddled in the doorway of a derelict building. She was young, no more than sixteen summers, with matted hair and ragged clothes. She looked up at him with wide, terrified eyes, and he could see the whites of her teeth as she bit her lip to keep from crying out. He advanced on her slowly, savoring the moment. He could almost taste the fear on her, like a fine wine. "Come now, lass," he said, his voice a low, dangerous purr. "You know you can't run from me. Give me what you took, and I might let you live." She shook her head, her eyes darting left and right, searching for an escape. Rennick smiled, a cold, cruel thing that didn't reach his eyes. "You're not thinking straight," he said. "You're just a child. You don't want to die here, in the gutter, like a rat." She spat at him, a glob of phlegm and hate that landed on his boot. He raised an eyebrow, impressed despite himself. "Feisty," he said. "I like that. It'll make this more fun." He lunged forward, his blades flashing in the dim light. She screamed and tried to run, but he was too fast. He caught her by the arm, his grip like iron, and yanked her back towards him. She kicked and struggled, but it was like trying to fight a storm. He slammed her against the wall, his forearm pressed against her throat, cutting off her air. "Where is it?" he growled, his breath hot and sour in her face. "Tell me, and I'll make it quick." She clawed at his arm, her nails leaving bloody furrows in his flesh. He didn't flinch. He was used to pain. He had been born in it, raised in it, bathed in it. It was his constant companion, his only friend. "I...I don't have it," she gasped, her voice raw and desperate. "I swear, I don't!" Rennick chuckled, a sound that sent shivers down her spine. "Lying to me is a bad idea, lass," he said. "I have ways of making people talk." He released her throat and slammed his fist into her stomach, doubling her over. She retched, but there was nothing in her stomach to come up. He grabbed a handful of her hair and yanked her head back, exposing her throat. He could see the pulse hammering in her neck, like a trapped bird trying to escape. "Tell me," he said, his voice a whisper. "Tell me, and I'll end this." She spat in his face, a defiant gesture that would be her last. He brought his blade up, the steel glinting in the moonlight, and sliced her throat from ear to ear. Blood sprayed across the cobblestones, warm and wet, and she slumped to the ground, her life draining away like water through a sieve. Rennick wiped the blood from his face and sheathed his blades. He didn't feel anything. No satisfaction, no remorse. Just the cold, empty void that had been his constant companion since the day he had taken his first life. He searched her body, but found nothing. He cursed under his breath. 
He would have to find the thing she had stolen the hard way. He turned to leave the alley, but something made him pause. A sound, barely audible over the cries of the gulls and the lapping of the waves. A soft, wet sound, like the patter of rain on a rooftop. He looked down and saw that the girl's body was twitching, her eyes rolling back in her head. And then, with a gurgling scream, she transformed. Her body contorted, bones snapping and reforming, flesh tearing and reshaping. In the space of a heartbeat, she was no longer human. She was something else, something monstrous. Rennick stared in horror as the thing that had been a girl rose to its feet, its eyes burning with an unholy light. It was a creature of nightmares, a twisted mockery of humanity, all teeth and claws and hate. It lunged at him, and he barely had time to draw his blades before it was on him. They fought, a dance of death in the narrow alley, steel ringing on steel, flesh tearing and blood spraying. Rennick was strong, and fast, and skilled, but the creature was stronger, faster, more vicious. It was like fighting a storm, a force of nature that could not be reasoned with or bargained with. It could only be endured. In the end, it was luck that saved him. He stumbled, his foot catching on a loose cobblestone, and the creature's claws raked across his chest, leaving deep, ragged gouges. But it overextended itself, and he was able to bring his sword up, driving it through its eye and into its brain. It fell to the ground, twitching and convulsing, and then it was still. Rennick stood over it, panting, his heart hammering in his chest. He had never seen anything like it. He had never even heard of anything like it. He knew then that he was in over his head. This was no ordinary job, no simple matter of retrieving a stolen trinket. This was something bigger, something darker. He was a small fish in a very large, very dangerous pond, and he was about to be eaten alive. But he was also a survivor. He had been born in the gutter, raised in the shadows, and he had clawed his way to the top of the food chain. He would not go down without a fight. He turned and ran, back towards the city, back towards the people who had hired him. He would find out what they knew, and he would make them pay for it. He would uncover the truth, no matter the cost. And so began the tale of the Butcher of the Broken Docks, a story of blood and betrayal, of monsters and men, of a city on the brink of chaos and the man who would either save it or destroy it. It was a story that would echo through the ages, a grim and terrible saga that would be whispered in the darkest corners of the world, a warning to all who would dare to challenge the forces that lurked in the shadows. It was a story that would end in fire and blood, in screams and tears, in the fall of kings and the rise of monsters. It was a story that had only just begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst. 
### ASSISTANT The sun was a merciless beast, its fiery breath scorching the earth and turning the once-thriving town into a desolate wasteland. The cultist, his yellow robes now a dull, dusty hue, staggered through the empty streets, his vision blurred by a haze of pain and delirium. The arrow jutted from his shoulder like an accusing finger, a constant reminder of his betrayal. He clutched the small, intricately carved mirror in his trembling hand, its cool surface a comfort against the relentless heat. He had heard the whispers of the other acolytes, their hushed voices carrying tales of the town's demise like a plague on the wind. They spoke of a curse, of a darkness that had seeped into the very bones of the place, driving its inhabitants mad before claiming their souls. But he had seen the truth in the High Priest's eyes, the greed and lust for power that had led them here. The mirror was the key, they said, to unlocking the secrets of the ancient gods. But he knew better. He had seen the power it held, the way it twisted reality and bent it to the will of the one who wielded it. And he had taken it, stolen it from the very heart of the cult's temple, and run. Now, with each agonizing step, he questioned the wisdom of his decision. The desert stretched out before him, an endless sea of sand and despair, and he was but a single, insignificant speck in its vast expanse. He had hoped to find refuge here, a place to hide and heal before continuing his journey. But the town was as dead as the dreams he had once held of a better life. His thoughts were a jumbled mess, a cacophony of fear and pain. He could feel the infection spreading through his body, a writhing serpent of fire that threatened to consume him from within. His tongue was swollen, his lips cracked and dry, and his throat was a parched wasteland. He needed water, and soon, or he would join the ghosts that haunted this forsaken place. As he stumbled through the town, he caught glimpses of his reflection in the shattered windows of the abandoned buildings. The man who stared back at him was a stranger, his once-handsome face now a twisted mask of agony and desperation. He had been a fool to believe the lies of the cult, to think that he could ever be one of them. He was an outsider, a fraud, and they had seen it in him from the start. The mirror, though, the mirror was real. He could feel its power pulsing in his hand, a heartbeat that echoed the rhythm of his own. It was a cruel irony that the very thing he had risked everything to obtain was now his only hope for survival. He knew the stories, the legends of the mirror's healing properties, of how it could restore life to the dying. But he also knew the cost. The mirror demanded a sacrifice, a piece of the user's very soul in exchange for its gifts. He reached the town square, a desolate expanse of cracked cobblestones and withered plants. In the center stood a fountain, its once-clear waters now a stagnant pool of green slime. He fell to his knees before it, the pain in his shoulder a white-hot brand that seared his flesh. He dipped the mirror into the water, watching as the surface rippled and danced. "Please," he whispered, his voice hoarse and broken. "I'll give you anything. Just take the pain away." The mirror stared back at him, its surface as still and unyielding as the desert sands. He closed his eyes, willing the power within to answer his plea. And then, he felt it. A cold, insidious presence, slithering into his mind like a snake in the dark. 
It whispered to him, promising relief, promising power, promising everything he had ever wanted. He opened his eyes, and the world was different. The pain was gone, replaced by a cold, emotionless void. He looked down at his shoulder, and the arrow was gone, the wound nothing more than a faint scar. He laughed, a hollow, empty sound that echoed through the empty streets. "What have I done?" he asked the mirror, but it gave no answer. He stood, his legs unsteady beneath him, and looked out at the desert. He could feel the power coursing through his veins, a dark, intoxicating energy that threatened to consume him. He was no longer just a man, he was something more. Something dangerous. He took a step forward, and then another, the mirror clutched tightly in his hand. He would find the cult, he would make them pay for what they had done to him. And then, he would use the mirror to claim the power that was rightfully his. The wind picked up, whipping the sand into a frenzy, and he smiled. The desert was his now, and he would not be denied. He was the master of his own destiny, and the gods themselves would tremble at his approach. But as he walked, the laughter of the ghosts followed him, a chilling reminder of the price he had paid for his newfound strength. And deep within the mirror, something stirred, a darkness that watched and waited, biding its time. The first chapter of the grimdark trilogy, "The Shattered Reflection," had begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character. ### ASSISTANT The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of few words and fewer scruples, trudged through the carnage, his boots squelching in the mud churned red by the night's bloodshed. He was a reaper of sorts, his task to deliver the final mercy to those who had danced with death and lost. His name was a whisper among the living, a curse among the dying. He paused, his hand resting on the hilt of his blade, as a shiver ran down his spine. The mist swirled around him, carrying with it the cacophony of the night before: the screams of the dying, the clash of steel, the thunder of hooves. He closed his eyes, letting the memories wash over him like a wave of despair. *Why do I do this?* he thought, not for the first time. *Why do I wade through this sea of suffering, playing the part of the grim angel?* But he knew the answer. It was the same answer it had always been. He did it because he was good at it. Because he had no qualms about ending a life, even one that still clung to the faintest flicker of hope. Because, in this world of darkness and pain, it was the only thing he knew how to do. He opened his eyes, the mist parting to reveal a figure lying in the mud. A soldier, or what was left of one. The man's armor was shattered, his face a mask of agony. The Mercer knelt beside him, his movements practiced and efficient. 
He placed a hand on the soldier's forehead, feeling the fevered heat of his skin, the erratic thump of his heart. "Shh," he whispered, his voice a soothing balm. "It's over now." The soldier's eyes fluttered open, wide with terror. "Please," he croaked, "don't leave me here." The Mercer's lips twisted into a grim smile. "I won't," he said, drawing his blade. "I'll take you with me." As the soldier's life bled out onto the cold ground, the Mercer's mind drifted back to another time, another place. A time before the blood and the pain, before the endless cycle of war and death. He remembered a girl with hair like spun gold, laughing as she danced in a field of wildflowers. He remembered the feel of her hand in his, the sound of her voice as she whispered his name. *Elara.* The memory was a knife to his heart, a reminder of all he had lost. He pushed it away, focusing on the task at hand. There were more souls to reap, more lives to end. He moved through the mist, his blade a silent whisper in the stillness. The cries of the wounded guided him, a morbid symphony that played out across the moor. Each one he found, he dispatched with the same cold efficiency, the same hollow promise of release. As the sun began to rise, casting a sickly pallor over the scene, the Mercer came upon a group of soldiers huddled around a fire. They eyed him warily, their faces a mix of fear and respect. He was a necessary evil, a specter that haunted the battlefield. "Anyone here need my services?" he asked, his voice devoid of emotion. One of the soldiers, a young boy who couldn't have seen more than fifteen summers, pointed to a figure lying a few feet away. "Him," he said, his voice trembling. "He's not going to make it." The Mercer approached, his boots crunching on the frost-kissed grass. The man on the ground was a mess, his body a tapestry of wounds. He was barely conscious, his breathing shallow and ragged. *Another one,* the Mercer thought, his heart heavy with the weight of his duty. *Another soul to send into the void.* He knelt beside the man, his blade glinting in the pale light. The soldier's eyes flickered open, and for a moment, the Mercer saw something in them that he hadn't expected: defiance. "No," the man rasped, his hand feebly grasping at the Mercer's wrist. "I won't go quietly." The Mercer hesitated, a frown creasing his brow. He had never encountered this before. Most welcomed his touch, eager for the release it brought. But this man...this man was different. "Why?" he asked, genuinely curious. The soldier's lips curled into a bloody smile. "Because I have a son," he whispered. "And I want to see him grow up." The Mercer stared at the man, his mind racing. He had no children, no one to mourn him when his time came. He was a solitary figure, a ghost haunting the edges of life. But this man...this man had a reason to hold on. He made his decision. "I can't heal you," he said, his voice soft. "But I can make it quick." The soldier's eyes filled with tears. "Thank you," he breathed, his grip on the Mercer's wrist loosening. The Mercer raised his blade, the mist swirling around them like a mourner's veil. As he brought it down, he couldn't help but wonder: *What would it be like to have something to live for?* But the thought was fleeting, lost in the swirl of the mist and the cries of the dying. He was the Mercer, the bringer of mercy. And the moor was his to tend, until the next battle came and the cycle began anew. 
</details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation. ### ASSISTANT The stench of piss and shit was a constant companion in the damp stone cell, but it was the sound of the key turning in the lock that truly set his heart racing. The rusted iron groaned in protest, echoing through the narrow corridor like the dying breath of some ancient beast. The door creaked open, revealing the silhouette of the jailer, a hulking brute of a man with a lantern in one hand and a set of keys in the other. But there was no breakfast, no water, just the empty promise of another day in this hellhole. "What's the meaning of this?" the smaller man barked, his voice barely audible over the rattling of his chains. "Where's our food, you fat sack of shit?" The jailer didn't answer, just slammed the door shut and locked it again, the sound of his footsteps fading into the distance. The larger man, a hulking brute named Grimgor, grunted in response, his eyes never leaving the small patch of light that seeped through the barred window. "Something's not right," he rumbled, his voice like gravel being crushed under a heavy boot. The smaller man, known only as the Fox, couldn't argue with that. He'd been in this cell for what felt like an eternity, awaiting execution for a crime he didn't commit. But even in this place, there was a routine, a semblance of order. And that routine had just been shattered. As the day wore on, the silence from the corridor outside grew more oppressive. No guards, no other prisoners, nothing but the distant howl of the wind and the occasional scurrying of rats in the darkness. The Fox's mind raced, trying to piece together what could have happened. Had there been a riot? A rebellion? Or was it something else entirely? He glanced over at Grimgor, who was staring at the wall, lost in thought. The man was a monster, a notorious murderer and rapist, but he was also the only other living being in this godforsaken place. And in a world gone mad, even monsters could be allies. "You hear anything last night?" the Fox asked, trying to keep the tremor out of his voice. Grimgor grunted, his eyes still fixed on the wall. "Something about a threat from the North." The Fox's heart sank. He'd heard the same thing, snatches of conversation between the guards as they'd passed by their cell. But he'd been too afraid to give it much thought, too focused on his own impending doom. "What do you think it means?" he pressed, unable to keep the desperation at bay. Grimgor shrugged his massive shoulders. "Dunno. Don't care. Just means we're probably gonna die in here." The Fox's stomach twisted at the thought. He'd faced death before, many times, but this... this was different. To die slowly, trapped like an animal, with no chance to fight back... it was a fate worse than any he could imagine. As the hours dragged on, the thirst became unbearable. 
The Fox's tongue felt like sandpaper, his throat raw and parched. He could see the same desperation in Grimgor's eyes, the realization dawning on them both that they were truly alone. "We have to get out of here," he croaked, his voice barely above a whisper. Grimgor just grunted in agreement, his gaze never leaving the window. The Fox's mind raced, trying to come up with a plan. They had nothing, no tools, no weapons, just their wits and their will to survive. And even that seemed to be fading with each passing moment. But then, as the sun began to set and the shadows lengthened, he noticed something. The light from the window was changing, growing dimmer. He squinted, trying to make out what was happening. And then he saw it. "Grimgor," he hissed, tugging on the larger man's arm. "Look." Grimgor turned, his eyes narrowing as he followed the Fox's gaze. The light was flickering, casting strange shadows on the wall. And then, as if in answer to their unspoken prayers, they heard it. The sound of footsteps, growing louder and louder, accompanied by the jingle of keys. The Fox's heart leapt into his throat, hope and fear warring within him. Who was it? Friend or foe? The door swung open, revealing not the jailer, but a figure shrouded in darkness. The Fox couldn't make out any details, but he could sense the malice radiating off of him like heat from a forge. "Who are you?" he demanded, trying to sound braver than he felt. The figure stepped into the cell, the light from the lantern outside casting his features into sharp relief. He was a tall man, with a cruel smile and eyes that glinted like ice. "I am your salvation," he said, his voice like a knife slicing through the silence. "And your doom." With that, he tossed a set of keys at the Fox's feet and disappeared back into the shadows. The door slammed shut behind him, the sound of the lock clicking into place echoing through the cell like a death knell. The Fox stared at the keys, his heart pounding in his chest. Freedom was within his grasp, but at what cost? He looked over at Grimgor, who was already working on his own chains. "We have to get out of here," the larger man growled. "Before whatever's coming from the North gets here." The Fox nodded, his hands shaking as he picked up the keys. They were cold, colder than they had any right to be. He could feel the weight of them, the promise of escape and the threat of the unknown. Together, they worked in silence, the only sound the rattle of chains and the pounding of their hearts. When they were finally free, they stood for a moment, staring at each other. "What now?" the Fox asked, his voice barely above a whisper. Grimgor's smile was feral, his eyes alight with a madness that sent a shiver down the Fox's spine. "Now," he said, "we find out what's really going on." And with that, they set off into the darkness, their footsteps echoing through the empty corridors like the march of the damned. </details> ## Dark sci-fi stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan. 
### ASSISTANT Chapter 1: Adrift The colony ship, once a beacon of hope named Elysium's Dawn, now drifted through the cold void like a ghost ship. Its hull, pitted and scarred from the debris of uncharted space, groaned with each labored breath it took. The once vibrant corridors were now dimly lit, casting long shadows that seemed to swallow any remaining semblance of optimism. In one such shadow, a figure moved silently, her bare feet making no sound on the cold metal floor. She was small for her age, malnourished from the rationing that had become a way of life. Her ragged clothes hung loosely on her frame, but her eyes were sharp and alert, scanning every corner for danger. This was Kira, an orphan of the void, surviving in a world where hope was a luxury few could afford. Kira's thoughts drifted back to the stories she had heard as a child, tales of a green and blue planet called Earth, their home that they would never see again. The ship had been pulled off course by an unknown force, leaving them stranded in the vast emptiness between stars. The adults spoke in hushed tones about the dwindling resources, the failing life support systems, and the ever-present threat of pirates who preyed on lost ships like theirs. She pushed those thoughts aside as she approached her destination: the hydroponics bay. It was one of the few places on the ship where life still thrived, albeit in a controlled and desperate manner. The air here was thick with moisture and the scent of growing things, a stark contrast to the stale recycled air that permeated the rest of the vessel. As she entered, the soft glow of the grow lights illuminated rows upon rows of plants, struggling to produce enough food for the dwindling population. Kira's stomach growled at the sight, but she knew better than to take more than her share. She had learned early on that stealing could mean a swift and brutal punishment from those who guarded the bay. Instead, she found solace in tending to the plants, her small hands deftly pruning and watering them. It was here that she felt closest to something resembling peace, a connection to life that transcended the harsh reality of their situation. As she worked, her mind wandered to the lessons she had learned from the ship's AI, the only constant presence in her life since the adults had given up on teaching the children anything but survival skills. "Kira," a voice whispered urgently, jolting her out of her reverie. She turned to see Jax, another orphan and one of the few people she trusted. His face was pale and his eyes wide with fear. "You need to come with me, now." She followed him through the maze-like corridors, her heart pounding in her chest. They arrived at a small, hidden room that served as their makeshift home. Inside, huddled around a flickering screen, were several other children, their faces etched with worry and excitement. "Look," Jax said, pointing to the screen. It displayed a grainy image of a man in tattered clothing, his face half-hidden by a wild beard. "He says he knows how to fix the engines." Kira's eyes widened in disbelief. The engines had been dead for as long as she could remember, their failure sealing their fate as interstellar castaways. "Who is he?" she asked, her voice barely above a whisper. "They call him the Mechanic," one of the older girls replied. "He used to work in engineering before...before everything fell apart." 
The man on the screen continued his frantic plea, detailing his plan to restore power and steer the ship towards a habitable planet he claimed was within reach. Kira felt a spark of hope ignite within her, but it was quickly doused by the skepticism of the others. "It's just another one of their lies," spat a boy named Taro. "They'll say anything to get us to do their dirty work." Jax rounded on him, his fists clenched. "This is our chance, Taro! We can't just give up!" The argument raged on, but Kira tuned it out, her thoughts racing. She knew the risks of trusting an adult, especially one who claimed to have access to forbidden knowledge. But what if he was telling the truth? What if they could find a new home and start over? As she watched the debate unfold, she made up her mind. She would seek out this Mechanic and offer him her help, no matter the cost. It was a decision born of desperation, but also of a stubborn refusal to accept the bleak future that lay before them. And so, as the other children slept fitfully around her, Kira slipped out into the dark corridors once more, determined to find the man who might hold the key to their salvation or doom. The ship groaned and creaked around her, a constant reminder of the fragile existence they all clung to. But she pressed on, driven by an unyielding will to survive and the faintest glimmer of hope that burned within her heart. As she navigated the treacherous path towards engineering, she couldn't help but wonder what awaited her in the depths of the ship. Would she find a savior or a madman? And more importantly, would she be able to make it back alive? </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then him the others notice the mushroom clouds and they have to face inevitability of the situation they are in. ### ASSISTANT Chapter 1: Ashes of Exodus The cold embrace of space enveloped the station like a lover's caress, its infinite blackness punctuated only by the distant twinkle of stars and the ever-present glow of Earth below. Captain Marcus Kane floated through the dimly lit corridors, his magnetic boots barely whispering against the metal floor as he made his way to the observation deck. The daily routine had become a meditative ritual for him, a chance to clear his mind before the cacophony of the day's work began. As he entered the room, the sight that greeted him never failed to take his breath away. Earth, a swirling marble of blues and greens, hanging in the void like an irreplaceable jewel. He couldn't help but feel a pang of longing for its familiar embrace, for the weight of gravity that he had left behind so many years ago. Marcus settled into his favorite spot by the window, the worn leather of the seat molded to his body like an old friend. He closed his eyes and let the silence wash over him, the only sound the soft hum of the station's life support systems. It was in these moments that he felt truly alive, connected to something greater than himself. But today, the silence was shattered by a chilling alarm, its shrill wail piercing the tranquility like a knife through glass. His eyes snapped open as his heart began to race, adrenaline coursing through his veins. 
He knew that sound all too well; it was the emergency alert for an incoming transmission from Earth. He launched himself towards the comms console, fingers flying over the keys as he tried to make sense of the garbled message. The voice on the other end was frantic, barely coherent through the static. "Multiple...detonations...global catastrophe..." were the only words he could decipher before the signal died completely. Marcus's mind raced as he tried to process what he had just heard. It couldn't be true, it couldn't be happening. Not again. He quickly activated the external cameras, his stomach twisting into knots as he waited for the feed to come online. And then he saw them: mushroom clouds, blooming like grotesque flowers across the planet's surface. Too many to count, their plumes of smoke and ash reaching up towards the heavens in a silent scream of despair. The sight was both beautiful and horrifying, a testament to humanity's capacity for destruction on an unimaginable scale. His thoughts were interrupted by the sound of footsteps behind him as the rest of the crew floated into the room, drawn by the alarm. Their faces mirrored his own shock and disbelief as they stared at the monitors, their voices a cacophony of whispers and gasps. "What...what is that?" asked Dr. Amelia Chen, her voice trembling with fear. Marcus didn't answer; he couldn't find the words. Instead, he simply pointed at the screen, his hand shaking slightly. Engineer Jake Taylor was the first to break the silence. "Nuclear war," he said, his voice flat and emotionless. "It's finally happened." The others looked at him in horror, but Marcus knew he was right. There could be no other explanation for such devastation. He felt a cold weight settle in the pit of his stomach as the reality of their situation began to sink in. They were alone, adrift in the vastness of space with no hope of return. As the implications of what they had just witnessed set in, the crew fell into an uneasy silence, each lost in their own thoughts. Marcus couldn't help but think about his wife and daughter back home, wondering if they were safe or if they too had been consumed by the flames. He knew it was a futile exercise, but he couldn't stop himself from hoping against hope that somehow, someway, they had survived. But as the minutes ticked by and no further communication came from Earth, that hope began to fade like a dying ember. They were truly alone now, the last remnants of humanity clinging to life on this tiny outpost in the void. He turned to face his crew, their faces etched with fear and uncertainty. "We need to come up with a plan," he said, his voice steady despite the turmoil raging within him. "We can't just sit here and wait for...for whatever comes next." Amelia nodded, her eyes hardening with determination. "I'll start running diagnostics on our life support systems, see how long we can last up here." Jake followed suit, his hands already moving to the controls of the station's mainframe. "I'll try to establish contact with any other surviving outposts or spacecraft." Marcus nodded, grateful for their resolve in the face of such overwhelming odds. He knew that they were all thinking the same thing: that this could very well be the end of everything they had ever known. But he also knew that they couldn't give up, not yet. As he watched them work, his mind drifted back to the message from Earth, those final desperate words echoing in his head like a funeral dirge. 
He couldn't shake the feeling that there was something more, some hidden meaning or clue that would explain what had happened. And then it hit him: the timing of the detonations, the pattern of the blasts. It wasn't random; it was deliberate, almost...calculated. But who would do such a thing? And why? The questions swirled in his mind like a storm, each one more unsettling than the last. He knew that they needed answers, and fast. Because if this was an act of war, then their isolation might not be as secure as they thought. He pushed himself away from the console, his decision made. "I'm going to suit up," he said, his voice firm. "We need to get a closer look." The others stared at him in disbelief, but he didn't waver. This was his duty as captain, to protect his crew and find out the truth no matter how terrible it might be. As he floated towards the airlock, Marcus couldn't help but feel a sense of dread creeping up on him like a shadow. He knew that what he was about to see would change everything, that there was no going back once he stepped outside and gazed upon the ashes of humanity's final stand. But he also felt something else: a burning anger, a fierce determination to uncover the truth and make those responsible pay for their crimes against humanity. Whatever awaited them out there, they would face it together, as one. He closed his eyes and took a deep breath, steeling himself for what was to come. And then, with a quiet prayer on his lips, he opened the airlock and stepped into the void. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here... ### ASSISTANT Chapter 1: The Biodome The air was thick with the scent of sweat and burnt metal as Kaelen, a seasoned construction worker, tightened the last bolt on the massive irrigation system. He wiped his brow with the back of his gloved hand, leaving behind a smear of grease that glistened under the harsh artificial lights. The hum of machinery and distant shouts echoed through the cavernous space, a cacophony that had become as familiar to him as his own heartbeat over the years spent building biodomes on this desolate planet. Kaelen's thoughts drifted to his family back on Earth, their faces faded like old photographs in his mind. He wondered if they ever thought about him, or if he was just another ghost haunting the fringes of their memories. The company had promised them a better life, but it came at a steep price: years of backbreaking labor on an alien world, far from everything they'd ever known. "Hey, Kael!" A voice cut through his reverie, jolting him back to reality. It was Tamsin, his foreman, her face hidden behind a tinted visor. "We need you up top! There's an issue with the atmospheric seal." He nodded curtly and began the long climb up the scaffolding, each rung biting into his calloused hands. As he ascended, Kaelen couldn't help but marvel at the sheer scale of their creation: a vast dome of steel and glass that would one day be teeming with life, a self-sustaining ecosystem in the heart of this barren wasteland. But today was not that day. Today, it was just another tomb waiting to be sealed. 
As he reached the top, Kaelen could see the problem immediately: a small fissure had formed along one of the joints, spewing precious oxygen into the void beyond. He cursed under his breath; they were already behind schedule and over budget. Another delay would mean another round of demerits, another month's pay docked. "What do you think?" Tamsin asked, her voice crackling through his earpiece. "Can we patch it up or do we need to call in the engineers?" Kaelen hesitated, running his fingers along the jagged edge of the tear. It was larger than he'd initially thought, and growing by the second. He could feel the cold tendrils of vacuum reaching out to claim him, whispering promises of oblivion. "I... I don't know," he admitted, his voice heavy with dread. "It doesn't look good." Tamsin swore colorfully and turned away, barking orders into her comm unit. Kaelen watched as workers scrambled to gather tools and materials, their movements frantic and disorganized. He knew they were all thinking the same thing: if they couldn't fix this, they were dead. The air around them grew colder, thinner, as the oxygen continued to escape. Kaelen's lungs burned with every breath, his vision swimming at the edges. He fumbled with the patch kit, his hands shaking uncontrollably. This was it; this was how he would die, millions of miles from home, in service to a corporation that saw him as nothing more than a replaceable cog in their grand machine. "Hurry up!" Tamsin shouted over the growing din. "We're losing pressure fast!" Kaelen's heart pounded in his chest like a sledgehammer, drowning out all other sound. He could feel the panic rising within him, threatening to consume him whole. But he couldn't afford to give in; not now, not when so much was at stake. With trembling hands, he applied the sealant and pressed the patch into place. For a moment, it seemed to hold... but then, with a sickening lurch, the fissure widened, swallowing the feeble attempt whole. The wind howled around them like a ravenous beast, tearing at their suits, trying to pull them apart atom by atom. "Abort!" Tamsin screamed, her voice barely audible over the roar. "Everyone get out now!" But it was too late. The dome shuddered beneath their feet, groaning in protest as the atmosphere inside rushed to equalize with the void outside. Kaelen felt himself being lifted off the ground, his boots scrabbling for purchase on the slick metal surface. He reached out, desperate to grab onto something - anything - but found only emptiness. And then, as suddenly as it had begun, there was silence. A deafening, suffocating silence that pressed in on him from all sides. His vision went dark, and he knew no more. When Kaelen awoke, he was lying on the cold floor of the dome, his body wracked with pain. He tried to move, but found himself held fast by twisted metal and shattered glass. Panic surged through him once again as he realized that his suit had been breached; he could feel the icy chill of vacuum seeping into his very bones. Through the haze of agony, he became aware of movement nearby. Tamsin was there, her visor cracked and fogged, struggling to free herself from the wreckage. She looked at him with wide, terrified eyes, her lips moving soundlessly. He knew what she was saying: they were going to die here, alone and forgotten. Kaelen closed his eyes, willing himself to accept the inevitable. 
But even as the darkness closed in, he couldn't shake the feeling that this wasn't just an accident - that there were forces at work far beyond their understanding, manipulating them like pawns on a cosmic chessboard. And then, just when he thought it was all over, he heard it: a faint beeping, growing louder by the second. His eyes snapped open to see a small drone hovering above them, its lights flashing urgently. It was one of the company's rescue bots, sent to assess the damage and retrieve any survivors. But there would be no salvation for Kaelen or Tamsin. Their suits were too damaged, their bodies too far gone. As the drone's camera lens bore down on him, he could only manage a weak, bitter smile. "Tell my family... tell them I'm sorry," he croaked, his voice barely audible even to himself. Tamsin reached out a gloved hand, her fingers brushing against his own in a final gesture of solidarity. And then, as the darkness claimed him once more, Kaelen knew that this was not the end of their story - but rather, the beginning of something far more sinister. Something that would change the course of human history forever... </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone. ### ASSISTANT Chapter 1 - Whispers from the Void The silence of deep space was like an old friend to Captain Alistair Reynolds, a constant companion that had accompanied him through countless missions and lonely nights at the helm of the listening station. But today, it felt more like a suffocating shroud, heavy with unspoken secrets and foreboding. He sat in his chair, fingers drumming impatiently on the console as he scanned the endless stream of data pouring in from the far reaches of the cosmos. His thoughts drifted to home, to the warmth of his wife's embrace and the laughter of their children playing in the garden under the twin suns of Proxima Centauri. It had been years since he last felt that warmth, since he last heard those laughs. The ache was always there, a dull throb that pulsed in time with his heartbeat, but today it seemed to echo through the cold metal halls of the station like a mournful dirge. A sudden blip on the screen snapped him back to reality. He leaned forward, eyes narrowing as he studied the readout. It was faint, barely more than a whisper against the static, but it was there - a signal, human in origin and heavily encrypted. His heart raced as he keyed in the decryption sequence, his mind already racing ahead to the possibilities of what this could mean. The message unscrambled, revealing a frantic voice, barely audible over the crackle of interference. "This is the UES Normandy, requesting immediate assistance... coordinates attached... under attack by unknown forces... God help us..." The transmission cut out abruptly, leaving only the empty hiss of static in its wake. Alistair's blood ran cold as he stared at the screen, the weight of the silence pressing down on him like a physical force. He knew that ship, had served with her captain back during the Martian uprising. They were old friends, bound by shared battles and the scars they bore from them. And now she was out there, alone and in danger. He didn't hesitate. 
His fingers flew over the console, sending out an emergency beacon to the nearest fleet. But he knew it would take time for help to arrive, time that the Normandy might not have. He had to do something now. As he prepared to launch a probe towards the coordinates, his mind filled with images of burning ships and desperate faces. He could almost hear the screams of the dying, echoing through the void like ghosts from his past. It was a sound that haunted him still, one he had hoped never to hear again. But duty called, and Alistair was nothing if not a soldier. He took a deep breath, steadying himself against the tide of fear and doubt that threatened to overwhelm him. This was his job, his purpose - to listen, to bear witness, and when necessary, to act. The probe shot out into the darkness, its tiny form swallowed up by the vastness of space. He watched it go, feeling a strange sense of detachment as if he were watching someone else's life play out before him. And perhaps in some ways, he was. For all his years in service, this was the first time he had ever been truly alone, cut off from the rest of humanity by light-years and the cold indifference of the universe. As the minutes ticked by, Alistair found himself slipping into a half-trance, his thoughts drifting back to happier times. He remembered the first time he had held his newborn son, the tiny fingers curling around his own as if trying to hold on for dear life. He remembered the taste of real Earth coffee, rich and bitter, shared with his wife under a sky full of stars that seemed close enough to touch. But those memories were distant now, fading like echoes in the void. All he had was this station, this endless vigil, and the hope that one day, he might make it back home. The radio crackled to life again, jolting him out of his reverie. It was the probe, transmitting data back from the edge of nowhere. He braced himself for what he would find, knowing that it could change everything. What greeted him on the screen was worse than anything he could have imagined. The Normandy was there, alright - or rather, what was left of her. She drifted through space like a broken toy, her hull breached and venting atmosphere into the cold void. There were no signs of life, no distress signals, nothing but silence. Alistair felt a cold knot form in his stomach as he took in the wreckage. This wasn't just any ship; this was his friend's ship, her crew now lost to the void. And the attackers... whoever or whatever they were, they had left no trace behind. He sat back in his chair, the weight of it all settling heavily upon him. He knew what he had to do next - file a report, alert command, and wait for further instructions. But as he stared at the lifeless husk that was once the Normandy, he couldn't shake the feeling that something far greater than himself was at work here. Something dark and terrible had come calling, whispering its secrets through the stars. And now it was his turn to listen, to bear witness, and perhaps, if he was lucky, to survive long enough to tell the tale. </details> ## Miscellaneous stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. 
The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out and their is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm everything breaks lose today. ### ASSISTANT In the dim light of the bunker, the technician sat hunched over his console, fingers dancing across the keys with a practiced urgency. The air was thick with the hum of machinery and the stench of sweat and fear. He had been down here for what felt like an eternity, but the digital clock on the wall insisted it had only been eighteen months since the world above had turned to ash. His name was Samuel, or at least that's what they called him now. In another life, he had been a software engineer, designing algorithms for self-driving cars and smart homes. Now, he was responsible for keeping twelve souls alive in this underground tomb, their last bastion against the desolation above. He glanced up at the monitor, the green glow illuminating his haggard face. The readings were clear: salted cobalt bombs had been used. The thought sent a shiver down his spine. They had all heard the rumors, of course, but they had clung to hope like a drowning man to driftwood. Hope that the war hadn't been as bad as they feared, that the surface would be habitable again soon. But now, the truth was undeniable. The world outside was dead, and it would remain so for generations. Samuel closed his eyes, trying to block out the images of a scorched earth, devoid of life. He could hear the others in the background, their voices a low murmur as they went about their daily tasks. They didn't know yet. How could he tell them? How could he be the one to snuff out the last ember of hope that had kept them going for so long? He took a deep breath and stood up, his joints creaking in protest. He made his way through the narrow corridors, past the rows of bunks where some slept fitfully, their dreams haunted by memories of a world they would never see again. Others were awake, their faces etched with lines of worry and despair. They looked up as he passed, but he couldn't meet their gazes. He reached the command center, where Captain Reynolds was poring over maps and reports. The man looked up as Samuel entered, his eyes heavy with exhaustion. "What is it?" he asked, his voice gruff. Samuel hesitated for a moment before speaking. "Sir, I have some... troubling news." The captain's expression didn't change, but Samuel could see the tension in his jaw. He had been expecting this, too. "Go on," he said simply. "The radiation readings... they're off the charts. It's cobalt-60. The half-life is over five years." Reynolds let out a slow breath, his eyes never leaving Samuel's face. "And our supplies?" "We have enough for six, maybe seven if we ration carefully. But not twelve." The room seemed to shrink around them as the weight of their situation settled in. The captain rubbed his temples, his thoughts racing. They had always known that their time in the bunker was limited, but they had been so close. So very close. "We'll need to make a decision," he said finally, his voice barely above a whisper. "Who stays, and who goes." Samuel felt a knot form in his stomach. He knew what that meant. It was survival of the fittest down here now, and there were those among them who would do anything to ensure their own survival. 
As he made his way back to his console, he couldn't help but think about the world above. The trees that would never grow again, the birds that would never sing. He thought of his wife and daughter, their faces fading like photographs left in the sun. They had been lucky, they said, to have made it into the bunker. But now, he wasn't so sure. In the mess hall, the atmosphere was tense. Whispers and hushed conversations filled the air as people tried to make sense of what they had just heard. Samuel sat alone at a table, staring blankly at his food. He couldn't eat. Not now. He felt a hand on his shoulder and looked up to see Maria, the doctor. Her eyes were red-rimmed, but her voice was steady. "We need to talk," she said, guiding him towards the infirmary. Once inside, she closed the door behind them and leaned against it, her body trembling. "What are we going to do?" she asked, her voice cracking. Samuel shook his head. "I don't know." "We can't just let six of us die," she said, her voice rising in panic. "There has to be another way." He looked at her, at the desperation etched on her face, and felt a flicker of anger. "What do you suggest? We all hold hands and sing Kumbaya while we starve to death?" Maria's eyes flashed with anger, but she quickly composed herself. "No," she said softly. "But we can't just give up." Samuel sighed and ran a hand through his greasy hair. "I'm not giving up," he said. "I'm just being realistic." She looked at him for a long moment before nodding. "You're right," she said, her voice barely audible. "We need to be realistic." They sat in silence for a while, each lost in their own thoughts. Then, Maria spoke up again. "There is one option," she said hesitantly. "But it's... drastic." Samuel raised an eyebrow. "What?" "We could... we could draw straws," she said, her voice barely above a whisper. "Choose six people at random to leave the bunker and take their chances on the surface." He stared at her in disbelief. "You can't be serious." "It's not ideal," she admitted. "But it's better than just letting them die down here." Samuel shook his head, his mind reeling. He couldn't fathom the thought of sending people out into that wasteland, knowing they would likely never return. But what other choice did they have? As he sat there, the weight of their decision pressing down on him like a leaden shroud, he heard a commotion outside. Shouting, followed by the sound of scuffling feet and something heavy being dragged across the floor. He exchanged a worried glance with Maria before rushing out into the corridor. What he saw made his blood run cold. A group of men had cornered one of the younger women, their faces twisted with rage and desperation. They were arguing over who would get to leave the bunker, who would get a chance at life. Fists flew, and soon, the sound of gunfire echoed through the halls. Samuel stood there, frozen in place as chaos erupted around him. This was it. The end of everything they had worked so hard to preserve. He could feel the darkness closing in, suffocating him, as he realized that their last hope for survival had just been shattered like glass under a bootheel. And all he could do was watch as the world above claimed them, one by one. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1948 Gold Rush, before moving to London in the mid 1970s. 
There should be a mysterious 'occult' item that connects these two time periods. ### ASSISTANT Chapter One: The Golden Veil The year was 1848, and California was a land of dreams and desperation. Men from all corners of the world had flocked to its golden shores, lured by the promise of wealth beyond their wildest imaginations. They came with pickaxes and pans, with hope in their hearts and greed in their eyes, ready to tear apart the earth itself in search of fortune. In the midst of this frenzy stood a solitary figure, a man who seemed out of place amidst the chaos. He was tall and lean, with a face that bore the lines of a thousand stories. His clothes were simple but well-made, his boots worn from years of travel. He moved with an air of quiet authority, as if he belonged to another time entirely. His name was Alistair Blackwood, and he had come to California not for gold, but for something far more valuable: knowledge. A scholar by trade and an adventurer at heart, Blackwood had spent his life chasing after ancient texts and forgotten lore, seeking to unravel the secrets of the universe. And now, he believed he was on the verge of a discovery that would change everything. Rumors had reached him of a strange artifact, said to have been found deep within the Sierra Nevada mountains. It was a small, intricately carved box made of an unknown metal, its surface etched with symbols that defied translation. Those who claimed to have seen it spoke in hushed tones of its otherworldly beauty and the eerie sense of power that seemed to emanate from within. Blackwood had tracked the rumors to a dusty saloon in Sacramento, where he hoped to find answers among the grizzled miners who frequented the place. As he pushed open the swinging doors, the din of voices and clinking glasses washed over him like a wave. He scanned the room, his gaze settling on a group of men huddled around a table in the corner. One look at their faces told him they were the ones he sought: sun-weathered and unshaven, their eyes glinting with a mixture of excitement and fear as they whispered amongst themselves. He approached them slowly, aware that he was an outsider in this rough-and-tumble world. "Gentlemen," he said, his voice low and cultured, "I couldn't help but overhear your conversation. Might I be so bold as to ask what has you all so...animated?" The men exchanged wary glances before one of them spoke up. "You ain't from around here, are ya?" he drawled, eyeing Blackwood's fine clothes with suspicion. "No," Blackwood admitted, "I am not. But I have traveled far and wide in search of knowledge, and I believe you may possess something that could be of great interest to me." He reached into his pocket and produced a small leather pouch, which he placed on the table with a soft thud. The men's eyes widened as they saw the glint of gold within. "I understand there is an object - a box, perhaps? - that has recently come into your possession. I would be willing to pay handsomely for any information you might have about it." The miners looked at each other, then back at Blackwood. Finally, the one who had spoken before nodded slowly. "We might know somethin' 'bout that," he said, his voice dropping to a conspiratorial whisper. "But it ain't no ordinary box." As they began to tell their tale, the saloon faded away around them, replaced by images of dark caverns and glittering treasure. Blackwood leaned forward, his heart pounding with anticipation. This was it - the moment he had been waiting for. 
The box, they said, had been found buried beneath a waterfall, hidden in a cave that seemed to defy all natural laws. It was guarded by strange creatures unlike anything any of them had ever seen: half-man, half-beast, with eyes that glowed like embers in the darkness. But the greatest mystery of all lay within the box itself. For when they opened it, they found not gold or jewels, but a single sheet of parchment, yellowed with age and covered in writing that none of them could decipher. Blackwood's pulse quickened as he listened. This was more than just another piece of occult nonsense; this was the real thing. He knew it in his bones. "I must see this box for myself," he said, his voice barely above a whisper. "Take me to it." The miners exchanged uneasy glances before nodding their assent. They led him out of the saloon and into the night, down winding streets and through shadowy alleyways until they reached a nondescript building on the edge of town. Inside, they descended a rickety staircase to a dimly lit basement filled with crates and barrels. There, in the center of the room, sat the object of his obsession: a small, unassuming box that seemed to hum with energy. Blackwood reached out a trembling hand to touch it, feeling the power coursing through him like electricity. As he did so, the world around him shifted, and suddenly he was no longer in a dingy basement in Sacramento. He stood instead in a grand library, its shelves groaning under the weight of countless tomes and scrolls. A fire crackled in the hearth, casting flickering shadows on the walls. A figure emerged from the gloom, cloaked in darkness save for two piercing eyes that seemed to bore into his very soul. "You have found it," the voice said, deep and resonant. "The key to unlocking the secrets of the universe." Blackwood stared at the apparition, unable to speak. He knew he should be afraid, but all he felt was awe. "But be warned," the figure continued, "for with great power comes great responsibility. The knowledge contained within this box is not meant for mortal minds." And then, just as suddenly as it had appeared, the vision vanished, leaving Blackwood alone once more in the dimly lit room. He stared at the box, his mind racing. What did it all mean? As he pondered this question, a sudden commotion erupted upstairs. Shouts and gunfire filled the air, followed by the sound of heavy footsteps on the staircase. The miners had betrayed him! He snatched up the box and fled into the night, his heart pounding in his chest. He knew he couldn't stay in California - not now that others were after the artifact. No, he would have to take it far away, to a place where its secrets could be studied in safety. And so began a journey that would span continents and decades, from the wild frontier of the Gold Rush to the foggy streets of Victorian London. For Alistair Blackwood had stumbled upon something much greater than mere treasure; he held in his hands the key to unlocking the very fabric of reality itself. But little did he know that the true mystery was only just beginning... </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. 
The story should not end well for either of them. ### ASSISTANT Chapter 1 The cold bit into my bones as I trudged through the snow, each step a labored effort. My breath came out in ragged puffs, clouding the air around me. In my arms, I cradled the weight of my world - my commander, Hauptmann Müller, his body limp and heavy with pain. His eyes were closed, but I could hear the shallow rasp of each breath he took, a testament to his will to live. The forest around us was a maze of shadows and death, the trees standing like skeletal sentinels against the moonlit sky. The silence was deafening, broken only by the distant rumble of artillery and the occasional crack of a rifle shot. It was a cruel irony that the same snow that muffled our footsteps also betrayed the presence of the enemy, their tracks glowing like phosphorescent snakes in the pale light. I stumbled, my legs threatening to give way under the strain. Hauptmann Müller groaned, his hand clutching at my coat. "Keep moving, Leutnant," he whispered, his voice barely audible. "We must reach the defensive line." I nodded, gritting my teeth as I adjusted my grip on him. The pain in my arms and shoulders was a constant companion now, but it was nothing compared to the fear that gnawed at my gut. We were alone, cut off from our unit during the chaos of the American assault. The bunker we had been holed up in was now a tomb for the men who had trusted us to lead them. A flash of memory seared through my mind, taking me back to that fateful moment when it all went wrong. The thunderous roar of explosions, the earth shaking beneath our feet as if the very foundations of hell were breaking loose. The screams of men, the acrid smell of smoke and cordite, the taste of blood in my mouth. I remembered the look on Müller's face when he realized what was happening - a mixture of disbelief and determination. He had barked orders, trying to rally our troops, but it was too late. The Americans were upon us, their numbers overwhelming. I saw him take a bullet to the leg, his face contorting in agony as he fell. And then there was nothing but chaos and darkness. When I came to, the bunker was silent except for the moans of the dying. The air was thick with smoke, making it hard to breathe. I found Müller slumped against a wall, his uniform soaked in blood. He looked up at me, his eyes glassy with pain but still burning with that same fierce determination. "We have to go," he said, gripping my arm. "They're coming." And so we had fled into the night, leaving behind the wreckage of our command post and the bodies of our comrades. Now, hours later, we were still stumbling through the forest, searching for safety that seemed to recede with every step. I could feel my strength ebbing away, my legs turning to lead. But I couldn't stop, not now. Not when we were so close. The deep defensive line was our only hope, a thin red line that stood between us and annihilation. If we could just make it there... But fate is a cruel mistress, and she had other plans for us. As we staggered out of the trees and into a small clearing, I saw them - a patrol of American soldiers, their silhouettes stark against the snow. They spotted us at the same time, their rifles swinging around to point in our direction. "Halt!" one of them yelled, his voice harsh and commanding. "Hands up!" I froze, my heart pounding in my chest. Müller's grip on me tightened, his breath coming in short gasps. I knew what he was thinking - we couldn't surrender, not like this. 
We were too valuable, too close to the front lines. They would shoot us or take us prisoner, and either way, it would be a death sentence for him. "Run," he whispered, his voice barely audible. "Leave me." I shook my head, tears stinging my eyes. I couldn't do that. I wouldn't. He was my commander, my friend. I owed him everything. But before I could make a move, the air was filled with the sound of gunfire. Müller jerked in my arms, a wet gurgle escaping his lips as he slumped against me. I stared down at him in disbelief, feeling the warmth of his blood soaking through my coat. "Nein," I whispered, my voice choked with grief and rage. "Nein!" I fell to my knees, cradling his lifeless body as the world around me spun out of control. The Americans were closing in, their shouts growing louder. I knew it was over - for both of us. With a final, desperate cry, I raised my pistol and fired into the darkness, hoping to take at least one of them with me. But my aim was wild, my strength gone. The last thing I saw was a flash of light, followed by an explosion of pain in my chest. And then there was nothing but blackness. As I slipped away, I thought of home - of my mother's warm embrace, of the laughter of my little sister. And I knew that I would never see them again. This frozen hellhole would be my grave, and Hauptmann Müller's sacrifice would be forgotten by all but me. But in that moment, as death claimed me, I found a strange sort of peace. We had fought for our Fatherland until the very end, and we had done so with honor. And in the grand tapestry of history, perhaps that was all that mattered. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be *very* dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform. ### ASSISTANT In the dimly lit corner of a nondescript Mexican cantina, Anton Chigurh sat motionless, his eyes scanning the room like a predator assessing its prey. The air was thick with the smell of stale cigarette smoke and cheap tequila, but he remained unfazed, his senses heightened, attuned to every sound, every movement around him. He nursed a glass of water, the condensation beading on its surface like droplets of sweat on cold steel. His mind wandered, as it often did, to the concept of chance. The randomness of life and death fascinated him, the idea that one's fate could hinge on something as simple as a coin flip or the roll of dice. He believed in the purity of such decisions, untainted by human emotion or prejudice. It was a philosophy he had embraced wholeheartedly, and it had served him well in his line of work. A fly buzzed around his head, its tiny wings beating an erratic rhythm against the stillness of the room. He watched it with mild interest, wondering if its life would end here, on this very spot, or if it would continue its meaningless existence elsewhere. The thought brought a small, cruel smile to his lips. The bartender, a grizzled old man with a face like leather, eyed him warily from across the room. 
He knew better than to engage in conversation with this stranger, who had entered his establishment an hour ago and hadn't spoken a word since. There was something about him that made the hairs on the back of his neck stand up, something cold and deadly lurking just beneath the surface. Chigurh's thoughts were interrupted by the sound of the door creaking open. A man in a cheap suit entered, looking around nervously before making his way towards him. He slid into the seat across from Chigurh, who continued to stare straight ahead, as if he hadn't noticed the new arrival. "You're late," Chigurh said finally, his voice low and gravelly. The man in the suit swallowed hard, adjusting his tie. "I apologize. Traffic." Chigurh didn't respond, his eyes still fixed on some distant point. The man fidgeted, reaching into his pocket to retrieve a small envelope. He slid it across the table, and Chigurh picked it up without looking at him, tearing it open with a single, precise movement. Inside was a photograph of a man, along with an address scrawled on a piece of paper. He studied the image for a moment, committing every detail to memory before slipping it back into the envelope and sliding it into his own pocket. "You know what needs to be done," the man in the suit said, his voice barely above a whisper. Chigurh nodded once, his mind already turning to the task at hand. He stood up, leaving a few crumpled bills on the table for the water he hadn't touched. As he walked towards the door, he could feel the weight of the grenade in his other pocket, its cool metal surface pressing against his thigh like a lover's caress. Outside, the heat of the day was oppressive, but Chigurh barely registered it as he climbed into his car and started the engine. He drove through the dusty streets, his thoughts consumed by the upcoming job. The man in the photograph had a wife, two kids. It didn't matter. They were all just pieces on a board, pawns to be moved or removed as necessary. He arrived at the address just as the sun was beginning to set, casting long shadows across the dilapidated building. He sat in his car for several minutes, watching the comings and goings of the neighborhood, waiting for the right moment. When it came, he slipped out of the vehicle and approached the door with a sense of purpose. Inside, he could hear laughter and music, the sounds of a family enjoying their evening together. He smiled to himself as he pulled the grenade from his pocket, feeling its weight in his hand. This was what he lived for: the moment when chance decided who would live and who would die. He took a deep breath, savoring the anticipation, before pulling the pin with a satisfying click. He tossed it into the room, not bothering to wait for the explosion as he turned and walked away. The screams that followed were like music to his ears, a symphony of chaos and destruction. As he drove off into the night, Chigurh couldn't help but feel a sense of satisfaction. Another job well done, another life ended by the whims of fate. He reached into his pocket and pulled out a coin, flipping it idly between his fingers as he considered his next move. Heads or tails? Life or death? It was all the same to him. The world was a cruel place, and he was just its instrument, meting out justice in the only way he knew how: with cold, calculated precision. And if some people happened to get caught in the crossfire... well, that was just the luck of the draw. 
</details> Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)!
[ "BEAR" ]
Non_BioNLP
![Dark-Miqu.png](Dark-Miqu.png)

***NOTE***: *A full range of GGUF quants is kindly provided by @mradermacher: [Static](https://huggingface.co/mradermacher/Dark-Miqu-70B-GGUF) and [IMatrix](https://huggingface.co/mradermacher/Dark-Miqu-70B-i1-GGUF).*

A "dark" creative writing model with 32k context, based on [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere! This model **excels** at writing Dark/Grimdark fantasy (see the examples below).

# Model background

Created using [Mergekit](https://github.com/arcee-ai/mergekit) and based on @sophosympatheia's template for [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0).

This model has a lower perplexity compared to [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) (`4.08 +/- 0.02` vs `4.02 +/- 0.02`). It also generates longer responses when prompted.

The model was created in two stages:

- First, three "Midnight-Miqu-esque" models were produced using spherical linear interpolation (slerp) merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and each of the following models: [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3), [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). These models were selected for their dark, imaginative writing styles. Various slerp merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and other models were also experimented with, but these three yielded the darkest creative writing results.
- In the second stage, the three slerp-merged models were combined into a single model using the '[Model Stock](https://arxiv.org/abs/2403.19522)' method, with [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) serving as the base model.

# Prompting format

Vicuna format is preferred:

```
USER: {prompt}
ASSISTANT:
```

Mistral and Alpaca formats are also supported:

```
[INST] {prompt} [/INST]
```

```
### Instruction:
{prompt}

### Response:
```

A minimal inference sketch using the Vicuna format is shown after the licence section below.

# Licence and usage restrictions

[miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only.
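As an illustration of the Vicuna prompt format above, here is a minimal inference sketch using llama-cpp-python with one of the GGUF quants linked at the top of this card. It is a sketch under assumptions, not an official recipe: the local filename, GPU-offload setting and generation parameters are placeholders to adapt to your own setup.

```python
# Minimal sketch. Assumptions: `pip install llama-cpp-python` and a Dark-Miqu GGUF
# quant already downloaded locally; the filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="Dark-Miqu-70B.Q4_K_M.gguf",  # hypothetical local path to your chosen quant
    n_ctx=32768,      # the card advertises 32k context; lower this if memory is tight
    n_gpu_layers=-1,  # offload all layers to GPU if they fit, otherwise reduce
)

# Vicuna format with no system message, matching how the example stories below were generated
prompt = "USER: Write me the opening chapter of a grimdark fantasy novel.\nASSISTANT:"

out = llm(prompt, max_tokens=1024, temperature=0.0, stop=["USER:"])
print(out["choices"][0]["text"])
```

Any backend that accepts a raw text prompt should work the same way; the only requirement is one of the prompt templates shown above.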
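For readers unfamiliar with the slerp merges referenced above and configured below, the following is a conceptual Python sketch of spherical linear interpolation applied to a pair of weight tensors. It is illustrative only and is not Mergekit's actual implementation; the function name, fallback threshold and example call are my own assumptions.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors: t=0 returns a, t=1 returns b."""
    a32, b32 = a.flatten().float(), b.flatten().float()
    a_unit = a32 / (a32.norm() + eps)
    b_unit = b32 / (b32.norm() + eps)
    cos_omega = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    omega = torch.acos(cos_omega)          # angle between the two weight vectors
    if omega.abs() < 1e-4:                 # nearly colinear: fall back to plain lerp
        merged = (1.0 - t) * a32 + t * b32
    else:
        sin_omega = torch.sin(omega)
        merged = (
            (torch.sin((1.0 - t) * omega) / sin_omega) * a32
            + (torch.sin(t * omega) / sin_omega) * b32
        )
    return merged.reshape(a.shape).to(a.dtype)

# e.g. blending a mid-stack layer at t=0.5, per the interpolation ramp in the config below
# merged_w = slerp(0.5, miqu_layer_weight, midnight_rose_layer_weight)
```

The usual motivation for slerp over plain averaging is that it follows the arc between the two weight vectors rather than the straight chord, which avoids the shrinkage in magnitude that plain averaging causes when the vectors point in different directions.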
# Mergekit configuration

The following YAML configuration was used to produce this model:

```yaml
name: midnight-miqu-70b
models:
  - model: 152334H/miqu-1-70b-sf
  - model: sophosympatheia/Midnight-Rose-70B-v2.0.3
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
  t:
    - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
  embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: euryale-miqu-70b
models:
  - model: 152334H/miqu-1-70b-sf
  - model: Sao10K/Euryale-1.3-L2-70B
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
  t:
    - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
  embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: winter-miqu-70b
models:
  - model: 152334H/miqu-1-70b-sf
  - model: Sao10K/WinterGoddess-1.4x-70B-L2
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
  t:
    - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
  embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: dark-miqu-70b
models:
  - model: 152334H/miqu-1-70b-sf
  - model: midnight-miqu-70b
  - model: euryale-miqu-70b
  - model: winter-miqu-70b
base_model: 152334H/miqu-1-70b-sf
merge_method: model_stock
dtype: float16
```

## Key configuration details:

- `merge_method: slerp` uses spherical linear interpolation for merging models.
- `parameters: t` controls the interpolation ratios between models.
- `embed_slerp: true` applies slerp to the embedding layers.
- `merge_method: model_stock` uses the '[Model Stock](https://arxiv.org/abs/2403.19522)' method.

See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings.

**NOTE**: Run with `mergekit-mega` rather than `mergekit`, as there are 4 documents in this one file.

# Example stories

The following mix of "dark" stories was generated using the Vicuna prompt format with no system message and temperature=0:

## Dark fantasy stories

<details>
<summary>Click to see spoiler</summary>

### USER
Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes

### ASSISTANT
Chapter One: The Crimson Dawn

The world was a cruel, unforgiving place, and the city of Vhyrd was its beating, blackened heart. Nestled in the crook of the Serpent's Spine mountains, it was a place where the sun struggled to shine, and the shadows held more sway than the law. It was a city of thieves, murderers, and worse, and it was home to the man who would become known as the Butcher of the Broken Docks. His name was Rennick, but that meant little to the people who feared him. He was a tall, broad-shouldered figure, with a face that had been carved by a drunken sculptor with a penchant for sharp angles and harsh lines. His eyes were a cold, flat grey, like the steel of his twin blades, and his hair was the color of dried blood. He moved through the crowded streets with a predator's grace, his cloak billowing behind him like a banner of death. Rennick had a job to do, and he was good at his job. He was a killer, a thief-taker, a man who dealt in the currency of pain and suffering. He had been hired to find a girl, a slip of a thing who had stolen something valuable from the wrong people. He didn't know what it was, and he didn't care. All that mattered was the coin that would jingle in his purse when he brought her back, preferably in several small, bloody pieces. He had tracked her to the Broken Docks, a maze of rotting wood and crumbling stone that clung to the edge of the city like a leech to a corpse.
It was a place where the desperate went to die, and the desperate went to prey on the desperate. It was a place where the screams of the damned were drowned out by the cries of the gulls and the lap of the oily water against the pilings. Rennick moved through the shadows, his blades at the ready. He could feel the eyes on him, the whispers of his name on the wind. He reveled in it. Fear was a weapon, and he wielded it with the same skill as his swords. He could hear the girl's heartbeat, fast and frantic, like a rabbit's. She was close. He turned a corner and found himself in a narrow alley, the walls pressing in on him like the jaws of a trap. At the far end, he saw her, a small, trembling figure huddled in the doorway of a derelict building. She was young, no more than sixteen summers, with matted hair and ragged clothes. She looked up at him with wide, terrified eyes, and he could see the whites of her teeth as she bit her lip to keep from crying out. He advanced on her slowly, savoring the moment. He could almost taste the fear on her, like a fine wine. "Come now, lass," he said, his voice a low, dangerous purr. "You know you can't run from me. Give me what you took, and I might let you live." She shook her head, her eyes darting left and right, searching for an escape. Rennick smiled, a cold, cruel thing that didn't reach his eyes. "You're not thinking straight," he said. "You're just a child. You don't want to die here, in the gutter, like a rat." She spat at him, a glob of phlegm and hate that landed on his boot. He raised an eyebrow, impressed despite himself. "Feisty," he said. "I like that. It'll make this more fun." He lunged forward, his blades flashing in the dim light. She screamed and tried to run, but he was too fast. He caught her by the arm, his grip like iron, and yanked her back towards him. She kicked and struggled, but it was like trying to fight a storm. He slammed her against the wall, his forearm pressed against her throat, cutting off her air. "Where is it?" he growled, his breath hot and sour in her face. "Tell me, and I'll make it quick." She clawed at his arm, her nails leaving bloody furrows in his flesh. He didn't flinch. He was used to pain. He had been born in it, raised in it, bathed in it. It was his constant companion, his only friend. "I...I don't have it," she gasped, her voice raw and desperate. "I swear, I don't!" Rennick chuckled, a sound that sent shivers down her spine. "Lying to me is a bad idea, lass," he said. "I have ways of making people talk." He released her throat and slammed his fist into her stomach, doubling her over. She retched, but there was nothing in her stomach to come up. He grabbed a handful of her hair and yanked her head back, exposing her throat. He could see the pulse hammering in her neck, like a trapped bird trying to escape. "Tell me," he said, his voice a whisper. "Tell me, and I'll end this." She spat in his face, a defiant gesture that would be her last. He brought his blade up, the steel glinting in the moonlight, and sliced her throat from ear to ear. Blood sprayed across the cobblestones, warm and wet, and she slumped to the ground, her life draining away like water through a sieve. Rennick wiped the blood from his face and sheathed his blades. He didn't feel anything. No satisfaction, no remorse. Just the cold, empty void that had been his constant companion since the day he had taken his first life. He searched her body, but found nothing. He cursed under his breath. 
He would have to find the thing she had stolen the hard way. He turned to leave the alley, but something made him pause. A sound, barely audible over the cries of the gulls and the lapping of the waves. A soft, wet sound, like the patter of rain on a rooftop. He looked down and saw that the girl's body was twitching, her eyes rolling back in her head. And then, with a gurgling scream, she transformed. Her body contorted, bones snapping and reforming, flesh tearing and reshaping. In the space of a heartbeat, she was no longer human. She was something else, something monstrous. Rennick stared in horror as the thing that had been a girl rose to its feet, its eyes burning with an unholy light. It was a creature of nightmares, a twisted mockery of humanity, all teeth and claws and hate. It lunged at him, and he barely had time to draw his blades before it was on him. They fought, a dance of death in the narrow alley, steel ringing on steel, flesh tearing and blood spraying. Rennick was strong, and fast, and skilled, but the creature was stronger, faster, more vicious. It was like fighting a storm, a force of nature that could not be reasoned with or bargained with. It could only be endured. In the end, it was luck that saved him. He stumbled, his foot catching on a loose cobblestone, and the creature's claws raked across his chest, leaving deep, ragged gouges. But it overextended itself, and he was able to bring his sword up, driving it through its eye and into its brain. It fell to the ground, twitching and convulsing, and then it was still. Rennick stood over it, panting, his heart hammering in his chest. He had never seen anything like it. He had never even heard of anything like it. He knew then that he was in over his head. This was no ordinary job, no simple matter of retrieving a stolen trinket. This was something bigger, something darker. He was a small fish in a very large, very dangerous pond, and he was about to be eaten alive. But he was also a survivor. He had been born in the gutter, raised in the shadows, and he had clawed his way to the top of the food chain. He would not go down without a fight. He turned and ran, back towards the city, back towards the people who had hired him. He would find out what they knew, and he would make them pay for it. He would uncover the truth, no matter the cost. And so began the tale of the Butcher of the Broken Docks, a story of blood and betrayal, of monsters and men, of a city on the brink of chaos and the man who would either save it or destroy it. It was a story that would echo through the ages, a grim and terrible saga that would be whispered in the darkest corners of the world, a warning to all who would dare to challenge the forces that lurked in the shadows. It was a story that would end in fire and blood, in screams and tears, in the fall of kings and the rise of monsters. It was a story that had only just begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst. 
### ASSISTANT
The sun was a merciless beast, its fiery breath scorching the earth and turning the once-thriving town into a desolate wasteland. The cultist, his yellow robes now a dull, dusty hue, staggered through the empty streets, his vision blurred by a haze of pain and delirium. The arrow jutted from his shoulder like an accusing finger, a constant reminder of his betrayal. He clutched the small, intricately carved mirror in his trembling hand, its cool surface a comfort against the relentless heat.

He had heard the whispers of the other acolytes, their hushed voices carrying tales of the town's demise like a plague on the wind. They spoke of a curse, of a darkness that had seeped into the very bones of the place, driving its inhabitants mad before claiming their souls. But he had seen the truth in the High Priest's eyes, the greed and lust for power that had led them here. The mirror was the key, they said, to unlocking the secrets of the ancient gods. But he knew better. He had seen the power it held, the way it twisted reality and bent it to the will of the one who wielded it. And he had taken it, stolen it from the very heart of the cult's temple, and run.

Now, with each agonizing step, he questioned the wisdom of his decision. The desert stretched out before him, an endless sea of sand and despair, and he was but a single, insignificant speck in its vast expanse. He had hoped to find refuge here, a place to hide and heal before continuing his journey. But the town was as dead as the dreams he had once held of a better life.

His thoughts were a jumbled mess, a cacophony of fear and pain. He could feel the infection spreading through his body, a writhing serpent of fire that threatened to consume him from within. His tongue was swollen, his lips cracked and dry, and his throat was a parched wasteland. He needed water, and soon, or he would join the ghosts that haunted this forsaken place.

As he stumbled through the town, he caught glimpses of his reflection in the shattered windows of the abandoned buildings. The man who stared back at him was a stranger, his once-handsome face now a twisted mask of agony and desperation. He had been a fool to believe the lies of the cult, to think that he could ever be one of them. He was an outsider, a fraud, and they had seen it in him from the start.

The mirror, though, the mirror was real. He could feel its power pulsing in his hand, a heartbeat that echoed the rhythm of his own. It was a cruel irony that the very thing he had risked everything to obtain was now his only hope for survival. He knew the stories, the legends of the mirror's healing properties, of how it could restore life to the dying. But he also knew the cost. The mirror demanded a sacrifice, a piece of the user's very soul in exchange for its gifts.

He reached the town square, a desolate expanse of cracked cobblestones and withered plants. In the center stood a fountain, its once-clear waters now a stagnant pool of green slime. He fell to his knees before it, the pain in his shoulder a white-hot brand that seared his flesh. He dipped the mirror into the water, watching as the surface rippled and danced.

"Please," he whispered, his voice hoarse and broken. "I'll give you anything. Just take the pain away."

The mirror stared back at him, its surface as still and unyielding as the desert sands. He closed his eyes, willing the power within to answer his plea. And then, he felt it. A cold, insidious presence, slithering into his mind like a snake in the dark.
It whispered to him, promising relief, promising power, promising everything he had ever wanted. He opened his eyes, and the world was different. The pain was gone, replaced by a cold, emotionless void. He looked down at his shoulder, and the arrow was gone, the wound nothing more than a faint scar. He laughed, a hollow, empty sound that echoed through the empty streets. "What have I done?" he asked the mirror, but it gave no answer. He stood, his legs unsteady beneath him, and looked out at the desert. He could feel the power coursing through his veins, a dark, intoxicating energy that threatened to consume him. He was no longer just a man, he was something more. Something dangerous. He took a step forward, and then another, the mirror clutched tightly in his hand. He would find the cult, he would make them pay for what they had done to him. And then, he would use the mirror to claim the power that was rightfully his. The wind picked up, whipping the sand into a frenzy, and he smiled. The desert was his now, and he would not be denied. He was the master of his own destiny, and the gods themselves would tremble at his approach. But as he walked, the laughter of the ghosts followed him, a chilling reminder of the price he had paid for his newfound strength. And deep within the mirror, something stirred, a darkness that watched and waited, biding its time. The first chapter of the grimdark trilogy, "The Shattered Reflection," had begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character. ### ASSISTANT The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of few words and fewer scruples, trudged through the carnage, his boots squelching in the mud churned red by the night's bloodshed. He was a reaper of sorts, his task to deliver the final mercy to those who had danced with death and lost. His name was a whisper among the living, a curse among the dying. He paused, his hand resting on the hilt of his blade, as a shiver ran down his spine. The mist swirled around him, carrying with it the cacophony of the night before: the screams of the dying, the clash of steel, the thunder of hooves. He closed his eyes, letting the memories wash over him like a wave of despair. *Why do I do this?* he thought, not for the first time. *Why do I wade through this sea of suffering, playing the part of the grim angel?* But he knew the answer. It was the same answer it had always been. He did it because he was good at it. Because he had no qualms about ending a life, even one that still clung to the faintest flicker of hope. Because, in this world of darkness and pain, it was the only thing he knew how to do. He opened his eyes, the mist parting to reveal a figure lying in the mud. A soldier, or what was left of one. The man's armor was shattered, his face a mask of agony. The Mercer knelt beside him, his movements practiced and efficient. 
He placed a hand on the soldier's forehead, feeling the fevered heat of his skin, the erratic thump of his heart. "Shh," he whispered, his voice a soothing balm. "It's over now." The soldier's eyes fluttered open, wide with terror. "Please," he croaked, "don't leave me here." The Mercer's lips twisted into a grim smile. "I won't," he said, drawing his blade. "I'll take you with me." As the soldier's life bled out onto the cold ground, the Mercer's mind drifted back to another time, another place. A time before the blood and the pain, before the endless cycle of war and death. He remembered a girl with hair like spun gold, laughing as she danced in a field of wildflowers. He remembered the feel of her hand in his, the sound of her voice as she whispered his name. *Elara.* The memory was a knife to his heart, a reminder of all he had lost. He pushed it away, focusing on the task at hand. There were more souls to reap, more lives to end. He moved through the mist, his blade a silent whisper in the stillness. The cries of the wounded guided him, a morbid symphony that played out across the moor. Each one he found, he dispatched with the same cold efficiency, the same hollow promise of release. As the sun began to rise, casting a sickly pallor over the scene, the Mercer came upon a group of soldiers huddled around a fire. They eyed him warily, their faces a mix of fear and respect. He was a necessary evil, a specter that haunted the battlefield. "Anyone here need my services?" he asked, his voice devoid of emotion. One of the soldiers, a young boy who couldn't have seen more than fifteen summers, pointed to a figure lying a few feet away. "Him," he said, his voice trembling. "He's not going to make it." The Mercer approached, his boots crunching on the frost-kissed grass. The man on the ground was a mess, his body a tapestry of wounds. He was barely conscious, his breathing shallow and ragged. *Another one,* the Mercer thought, his heart heavy with the weight of his duty. *Another soul to send into the void.* He knelt beside the man, his blade glinting in the pale light. The soldier's eyes flickered open, and for a moment, the Mercer saw something in them that he hadn't expected: defiance. "No," the man rasped, his hand feebly grasping at the Mercer's wrist. "I won't go quietly." The Mercer hesitated, a frown creasing his brow. He had never encountered this before. Most welcomed his touch, eager for the release it brought. But this man...this man was different. "Why?" he asked, genuinely curious. The soldier's lips curled into a bloody smile. "Because I have a son," he whispered. "And I want to see him grow up." The Mercer stared at the man, his mind racing. He had no children, no one to mourn him when his time came. He was a solitary figure, a ghost haunting the edges of life. But this man...this man had a reason to hold on. He made his decision. "I can't heal you," he said, his voice soft. "But I can make it quick." The soldier's eyes filled with tears. "Thank you," he breathed, his grip on the Mercer's wrist loosening. The Mercer raised his blade, the mist swirling around them like a mourner's veil. As he brought it down, he couldn't help but wonder: *What would it be like to have something to live for?* But the thought was fleeting, lost in the swirl of the mist and the cries of the dying. He was the Mercer, the bringer of mercy. And the moor was his to tend, until the next battle came and the cycle began anew. 
</details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation. ### ASSISTANT The stench of piss and shit was a constant companion in the damp stone cell, but it was the sound of the key turning in the lock that truly set his heart racing. The rusted iron groaned in protest, echoing through the narrow corridor like the dying breath of some ancient beast. The door creaked open, revealing the silhouette of the jailer, a hulking brute of a man with a lantern in one hand and a set of keys in the other. But there was no breakfast, no water, just the empty promise of another day in this hellhole. "What's the meaning of this?" the smaller man barked, his voice barely audible over the rattling of his chains. "Where's our food, you fat sack of shit?" The jailer didn't answer, just slammed the door shut and locked it again, the sound of his footsteps fading into the distance. The larger man, a hulking brute named Grimgor, grunted in response, his eyes never leaving the small patch of light that seeped through the barred window. "Something's not right," he rumbled, his voice like gravel being crushed under a heavy boot. The smaller man, known only as the Fox, couldn't argue with that. He'd been in this cell for what felt like an eternity, awaiting execution for a crime he didn't commit. But even in this place, there was a routine, a semblance of order. And that routine had just been shattered. As the day wore on, the silence from the corridor outside grew more oppressive. No guards, no other prisoners, nothing but the distant howl of the wind and the occasional scurrying of rats in the darkness. The Fox's mind raced, trying to piece together what could have happened. Had there been a riot? A rebellion? Or was it something else entirely? He glanced over at Grimgor, who was staring at the wall, lost in thought. The man was a monster, a notorious murderer and rapist, but he was also the only other living being in this godforsaken place. And in a world gone mad, even monsters could be allies. "You hear anything last night?" the Fox asked, trying to keep the tremor out of his voice. Grimgor grunted, his eyes still fixed on the wall. "Something about a threat from the North." The Fox's heart sank. He'd heard the same thing, snatches of conversation between the guards as they'd passed by their cell. But he'd been too afraid to give it much thought, too focused on his own impending doom. "What do you think it means?" he pressed, unable to keep the desperation at bay. Grimgor shrugged his massive shoulders. "Dunno. Don't care. Just means we're probably gonna die in here." The Fox's stomach twisted at the thought. He'd faced death before, many times, but this... this was different. To die slowly, trapped like an animal, with no chance to fight back... it was a fate worse than any he could imagine. As the hours dragged on, the thirst became unbearable. 
The Fox's tongue felt like sandpaper, his throat raw and parched. He could see the same desperation in Grimgor's eyes, the realization dawning on them both that they were truly alone.

"We have to get out of here," he croaked, his voice barely above a whisper.

Grimgor just grunted in agreement, his gaze never leaving the window. The Fox's mind raced, trying to come up with a plan. They had nothing, no tools, no weapons, just their wits and their will to survive. And even that seemed to be fading with each passing moment.

But then, as the sun began to set and the shadows lengthened, he noticed something. The light from the window was changing, growing dimmer. He squinted, trying to make out what was happening. And then he saw it.

"Grimgor," he hissed, tugging on the larger man's arm. "Look."

Grimgor turned, his eyes narrowing as he followed the Fox's gaze. The light was flickering, casting strange shadows on the wall. And then, as if in answer to their unspoken prayers, they heard it. The sound of footsteps, growing louder and louder, accompanied by the jingle of keys. The Fox's heart leapt into his throat, hope and fear warring within him. Who was it? Friend or foe?

The door swung open, revealing not the jailer, but a figure shrouded in darkness. The Fox couldn't make out any details, but he could sense the malice radiating off of him like heat from a forge.

"Who are you?" he demanded, trying to sound braver than he felt.

The figure stepped into the cell, the light from the lantern outside casting his features into sharp relief. He was a tall man, with a cruel smile and eyes that glinted like ice.

"I am your salvation," he said, his voice like a knife slicing through the silence. "And your doom."

With that, he tossed a set of keys at the Fox's feet and disappeared back into the shadows. The door slammed shut behind him, the sound of the lock clicking into place echoing through the cell like a death knell. The Fox stared at the keys, his heart pounding in his chest. Freedom was within his grasp, but at what cost? He looked over at Grimgor, who was already working on his own chains.

"We have to get out of here," the larger man growled. "Before whatever's coming from the North gets here."

The Fox nodded, his hands shaking as he picked up the keys. They were cold, colder than they had any right to be. He could feel the weight of them, the promise of escape and the threat of the unknown. Together, they worked in silence, the only sound the rattle of chains and the pounding of their hearts. When they were finally free, they stood for a moment, staring at each other.

"What now?" the Fox asked, his voice barely above a whisper.

Grimgor's smile was feral, his eyes alight with a madness that sent a shiver down the Fox's spine. "Now," he said, "we find out what's really going on."

And with that, they set off into the darkness, their footsteps echoing through the empty corridors like the march of the damned.

</details>

## Dark sci-fi stories

<details>
<summary>Click to see spoiler</summary>

### USER
Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan.
### ASSISTANT
Chapter 1: Adrift

The colony ship, once a beacon of hope named Elysium's Dawn, now drifted through the cold void like a ghost ship. Its hull, pitted and scarred from the debris of uncharted space, groaned with each labored breath it took. The once vibrant corridors were now dimly lit, casting long shadows that seemed to swallow any remaining semblance of optimism.

In one such shadow, a figure moved silently, her bare feet making no sound on the cold metal floor. She was small for her age, malnourished from the rationing that had become a way of life. Her ragged clothes hung loosely on her frame, but her eyes were sharp and alert, scanning every corner for danger. This was Kira, an orphan of the void, surviving in a world where hope was a luxury few could afford.

Kira's thoughts drifted back to the stories she had heard as a child, tales of a green and blue planet called Earth, their home that they would never see again. The ship had been pulled off course by an unknown force, leaving them stranded in the vast emptiness between stars. The adults spoke in hushed tones about the dwindling resources, the failing life support systems, and the ever-present threat of pirates who preyed on lost ships like theirs.

She pushed those thoughts aside as she approached her destination: the hydroponics bay. It was one of the few places on the ship where life still thrived, albeit in a controlled and desperate manner. The air here was thick with moisture and the scent of growing things, a stark contrast to the stale recycled air that permeated the rest of the vessel.

As she entered, the soft glow of the grow lights illuminated rows upon rows of plants, struggling to produce enough food for the dwindling population. Kira's stomach growled at the sight, but she knew better than to take more than her share. She had learned early on that stealing could mean a swift and brutal punishment from those who guarded the bay.

Instead, she found solace in tending to the plants, her small hands deftly pruning and watering them. It was here that she felt closest to something resembling peace, a connection to life that transcended the harsh reality of their situation. As she worked, her mind wandered to the lessons she had learned from the ship's AI, the only constant presence in her life since the adults had given up on teaching the children anything but survival skills.

"Kira," a voice whispered urgently, jolting her out of her reverie. She turned to see Jax, another orphan and one of the few people she trusted. His face was pale and his eyes wide with fear. "You need to come with me, now."

She followed him through the maze-like corridors, her heart pounding in her chest. They arrived at a small, hidden room that served as their makeshift home. Inside, huddled around a flickering screen, were several other children, their faces etched with worry and excitement.

"Look," Jax said, pointing to the screen. It displayed a grainy image of a man in tattered clothing, his face half-hidden by a wild beard. "He says he knows how to fix the engines."

Kira's eyes widened in disbelief. The engines had been dead for as long as she could remember, their failure sealing their fate as interstellar castaways. "Who is he?" she asked, her voice barely above a whisper.

"They call him the Mechanic," one of the older girls replied. "He used to work in engineering before...before everything fell apart."
The man on the screen continued his frantic plea, detailing his plan to restore power and steer the ship towards a habitable planet he claimed was within reach. Kira felt a spark of hope ignite within her, but it was quickly doused by the skepticism of the others. "It's just another one of their lies," spat a boy named Taro. "They'll say anything to get us to do their dirty work." Jax rounded on him, his fists clenched. "This is our chance, Taro! We can't just give up!" The argument raged on, but Kira tuned it out, her thoughts racing. She knew the risks of trusting an adult, especially one who claimed to have access to forbidden knowledge. But what if he was telling the truth? What if they could find a new home and start over? As she watched the debate unfold, she made up her mind. She would seek out this Mechanic and offer him her help, no matter the cost. It was a decision born of desperation, but also of a stubborn refusal to accept the bleak future that lay before them. And so, as the other children slept fitfully around her, Kira slipped out into the dark corridors once more, determined to find the man who might hold the key to their salvation or doom. The ship groaned and creaked around her, a constant reminder of the fragile existence they all clung to. But she pressed on, driven by an unyielding will to survive and the faintest glimmer of hope that burned within her heart. As she navigated the treacherous path towards engineering, she couldn't help but wonder what awaited her in the depths of the ship. Would she find a savior or a madman? And more importantly, would she be able to make it back alive? </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then him the others notice the mushroom clouds and they have to face inevitability of the situation they are in. ### ASSISTANT Chapter 1: Ashes of Exodus The cold embrace of space enveloped the station like a lover's caress, its infinite blackness punctuated only by the distant twinkle of stars and the ever-present glow of Earth below. Captain Marcus Kane floated through the dimly lit corridors, his magnetic boots barely whispering against the metal floor as he made his way to the observation deck. The daily routine had become a meditative ritual for him, a chance to clear his mind before the cacophony of the day's work began. As he entered the room, the sight that greeted him never failed to take his breath away. Earth, a swirling marble of blues and greens, hanging in the void like an irreplaceable jewel. He couldn't help but feel a pang of longing for its familiar embrace, for the weight of gravity that he had left behind so many years ago. Marcus settled into his favorite spot by the window, the worn leather of the seat molded to his body like an old friend. He closed his eyes and let the silence wash over him, the only sound the soft hum of the station's life support systems. It was in these moments that he felt truly alive, connected to something greater than himself. But today, the silence was shattered by a chilling alarm, its shrill wail piercing the tranquility like a knife through glass. His eyes snapped open as his heart began to race, adrenaline coursing through his veins. 
He knew that sound all too well; it was the emergency alert for an incoming transmission from Earth. He launched himself towards the comms console, fingers flying over the keys as he tried to make sense of the garbled message. The voice on the other end was frantic, barely coherent through the static. "Multiple...detonations...global catastrophe..." were the only words he could decipher before the signal died completely. Marcus's mind raced as he tried to process what he had just heard. It couldn't be true, it couldn't be happening. Not again. He quickly activated the external cameras, his stomach twisting into knots as he waited for the feed to come online. And then he saw them: mushroom clouds, blooming like grotesque flowers across the planet's surface. Too many to count, their plumes of smoke and ash reaching up towards the heavens in a silent scream of despair. The sight was both beautiful and horrifying, a testament to humanity's capacity for destruction on an unimaginable scale. His thoughts were interrupted by the sound of footsteps behind him as the rest of the crew floated into the room, drawn by the alarm. Their faces mirrored his own shock and disbelief as they stared at the monitors, their voices a cacophony of whispers and gasps. "What...what is that?" asked Dr. Amelia Chen, her voice trembling with fear. Marcus didn't answer; he couldn't find the words. Instead, he simply pointed at the screen, his hand shaking slightly. Engineer Jake Taylor was the first to break the silence. "Nuclear war," he said, his voice flat and emotionless. "It's finally happened." The others looked at him in horror, but Marcus knew he was right. There could be no other explanation for such devastation. He felt a cold weight settle in the pit of his stomach as the reality of their situation began to sink in. They were alone, adrift in the vastness of space with no hope of return. As the implications of what they had just witnessed set in, the crew fell into an uneasy silence, each lost in their own thoughts. Marcus couldn't help but think about his wife and daughter back home, wondering if they were safe or if they too had been consumed by the flames. He knew it was a futile exercise, but he couldn't stop himself from hoping against hope that somehow, someway, they had survived. But as the minutes ticked by and no further communication came from Earth, that hope began to fade like a dying ember. They were truly alone now, the last remnants of humanity clinging to life on this tiny outpost in the void. He turned to face his crew, their faces etched with fear and uncertainty. "We need to come up with a plan," he said, his voice steady despite the turmoil raging within him. "We can't just sit here and wait for...for whatever comes next." Amelia nodded, her eyes hardening with determination. "I'll start running diagnostics on our life support systems, see how long we can last up here." Jake followed suit, his hands already moving to the controls of the station's mainframe. "I'll try to establish contact with any other surviving outposts or spacecraft." Marcus nodded, grateful for their resolve in the face of such overwhelming odds. He knew that they were all thinking the same thing: that this could very well be the end of everything they had ever known. But he also knew that they couldn't give up, not yet. As he watched them work, his mind drifted back to the message from Earth, those final desperate words echoing in his head like a funeral dirge. 
He couldn't shake the feeling that there was something more, some hidden meaning or clue that would explain what had happened. And then it hit him: the timing of the detonations, the pattern of the blasts. It wasn't random; it was deliberate, almost...calculated. But who would do such a thing? And why? The questions swirled in his mind like a storm, each one more unsettling than the last. He knew that they needed answers, and fast. Because if this was an act of war, then their isolation might not be as secure as they thought. He pushed himself away from the console, his decision made. "I'm going to suit up," he said, his voice firm. "We need to get a closer look." The others stared at him in disbelief, but he didn't waver. This was his duty as captain, to protect his crew and find out the truth no matter how terrible it might be. As he floated towards the airlock, Marcus couldn't help but feel a sense of dread creeping up on him like a shadow. He knew that what he was about to see would change everything, that there was no going back once he stepped outside and gazed upon the ashes of humanity's final stand. But he also felt something else: a burning anger, a fierce determination to uncover the truth and make those responsible pay for their crimes against humanity. Whatever awaited them out there, they would face it together, as one. He closed his eyes and took a deep breath, steeling himself for what was to come. And then, with a quiet prayer on his lips, he opened the airlock and stepped into the void. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here... ### ASSISTANT Chapter 1: The Biodome The air was thick with the scent of sweat and burnt metal as Kaelen, a seasoned construction worker, tightened the last bolt on the massive irrigation system. He wiped his brow with the back of his gloved hand, leaving behind a smear of grease that glistened under the harsh artificial lights. The hum of machinery and distant shouts echoed through the cavernous space, a cacophony that had become as familiar to him as his own heartbeat over the years spent building biodomes on this desolate planet. Kaelen's thoughts drifted to his family back on Earth, their faces faded like old photographs in his mind. He wondered if they ever thought about him, or if he was just another ghost haunting the fringes of their memories. The company had promised them a better life, but it came at a steep price: years of backbreaking labor on an alien world, far from everything they'd ever known. "Hey, Kael!" A voice cut through his reverie, jolting him back to reality. It was Tamsin, his foreman, her face hidden behind a tinted visor. "We need you up top! There's an issue with the atmospheric seal." He nodded curtly and began the long climb up the scaffolding, each rung biting into his calloused hands. As he ascended, Kaelen couldn't help but marvel at the sheer scale of their creation: a vast dome of steel and glass that would one day be teeming with life, a self-sustaining ecosystem in the heart of this barren wasteland. But today was not that day. Today, it was just another tomb waiting to be sealed. 
As he reached the top, Kaelen could see the problem immediately: a small fissure had formed along one of the joints, spewing precious oxygen into the void beyond. He cursed under his breath; they were already behind schedule and over budget. Another delay would mean another round of demerits, another month's pay docked. "What do you think?" Tamsin asked, her voice crackling through his earpiece. "Can we patch it up or do we need to call in the engineers?" Kaelen hesitated, running his fingers along the jagged edge of the tear. It was larger than he'd initially thought, and growing by the second. He could feel the cold tendrils of vacuum reaching out to claim him, whispering promises of oblivion. "I... I don't know," he admitted, his voice heavy with dread. "It doesn't look good." Tamsin swore colorfully and turned away, barking orders into her comm unit. Kaelen watched as workers scrambled to gather tools and materials, their movements frantic and disorganized. He knew they were all thinking the same thing: if they couldn't fix this, they were dead. The air around them grew colder, thinner, as the oxygen continued to escape. Kaelen's lungs burned with every breath, his vision swimming at the edges. He fumbled with the patch kit, his hands shaking uncontrollably. This was it; this was how he would die, millions of miles from home, in service to a corporation that saw him as nothing more than a replaceable cog in their grand machine. "Hurry up!" Tamsin shouted over the growing din. "We're losing pressure fast!" Kaelen's heart pounded in his chest like a sledgehammer, drowning out all other sound. He could feel the panic rising within him, threatening to consume him whole. But he couldn't afford to give in; not now, not when so much was at stake. With trembling hands, he applied the sealant and pressed the patch into place. For a moment, it seemed to hold... but then, with a sickening lurch, the fissure widened, swallowing the feeble attempt whole. The wind howled around them like a ravenous beast, tearing at their suits, trying to pull them apart atom by atom. "Abort!" Tamsin screamed, her voice barely audible over the roar. "Everyone get out now!" But it was too late. The dome shuddered beneath their feet, groaning in protest as the atmosphere inside rushed to equalize with the void outside. Kaelen felt himself being lifted off the ground, his boots scrabbling for purchase on the slick metal surface. He reached out, desperate to grab onto something - anything - but found only emptiness. And then, as suddenly as it had begun, there was silence. A deafening, suffocating silence that pressed in on him from all sides. His vision went dark, and he knew no more. When Kaelen awoke, he was lying on the cold floor of the dome, his body wracked with pain. He tried to move, but found himself held fast by twisted metal and shattered glass. Panic surged through him once again as he realized that his suit had been breached; he could feel the icy chill of vacuum seeping into his very bones. Through the haze of agony, he became aware of movement nearby. Tamsin was there, her visor cracked and fogged, struggling to free herself from the wreckage. She looked at him with wide, terrified eyes, her lips moving soundlessly. He knew what she was saying: they were going to die here, alone and forgotten. Kaelen closed his eyes, willing himself to accept the inevitable. 
But even as the darkness closed in, he couldn't shake the feeling that this wasn't just an accident - that there were forces at work far beyond their understanding, manipulating them like pawns on a cosmic chessboard. And then, just when he thought it was all over, he heard it: a faint beeping, growing louder by the second. His eyes snapped open to see a small drone hovering above them, its lights flashing urgently. It was one of the company's rescue bots, sent to assess the damage and retrieve any survivors. But there would be no salvation for Kaelen or Tamsin. Their suits were too damaged, their bodies too far gone. As the drone's camera lens bore down on him, he could only manage a weak, bitter smile. "Tell my family... tell them I'm sorry," he croaked, his voice barely audible even to himself. Tamsin reached out a gloved hand, her fingers brushing against his own in a final gesture of solidarity. And then, as the darkness claimed him once more, Kaelen knew that this was not the end of their story - but rather, the beginning of something far more sinister. Something that would change the course of human history forever... </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone. ### ASSISTANT Chapter 1 - Whispers from the Void The silence of deep space was like an old friend to Captain Alistair Reynolds, a constant companion that had accompanied him through countless missions and lonely nights at the helm of the listening station. But today, it felt more like a suffocating shroud, heavy with unspoken secrets and foreboding. He sat in his chair, fingers drumming impatiently on the console as he scanned the endless stream of data pouring in from the far reaches of the cosmos. His thoughts drifted to home, to the warmth of his wife's embrace and the laughter of their children playing in the garden under the twin suns of Proxima Centauri. It had been years since he last felt that warmth, since he last heard those laughs. The ache was always there, a dull throb that pulsed in time with his heartbeat, but today it seemed to echo through the cold metal halls of the station like a mournful dirge. A sudden blip on the screen snapped him back to reality. He leaned forward, eyes narrowing as he studied the readout. It was faint, barely more than a whisper against the static, but it was there - a signal, human in origin and heavily encrypted. His heart raced as he keyed in the decryption sequence, his mind already racing ahead to the possibilities of what this could mean. The message unscrambled, revealing a frantic voice, barely audible over the crackle of interference. "This is the UES Normandy, requesting immediate assistance... coordinates attached... under attack by unknown forces... God help us..." The transmission cut out abruptly, leaving only the empty hiss of static in its wake. Alistair's blood ran cold as he stared at the screen, the weight of the silence pressing down on him like a physical force. He knew that ship, had served with her captain back during the Martian uprising. They were old friends, bound by shared battles and the scars they bore from them. And now she was out there, alone and in danger. He didn't hesitate. 
His fingers flew over the console, sending out an emergency beacon to the nearest fleet. But he knew it would take time for help to arrive, time that the Normandy might not have. He had to do something now.

As he prepared to launch a probe towards the coordinates, his mind filled with images of burning ships and desperate faces. He could almost hear the screams of the dying, echoing through the void like ghosts from his past. It was a sound that haunted him still, one he had hoped never to hear again. But duty called, and Alistair was nothing if not a soldier. He took a deep breath, steadying himself against the tide of fear and doubt that threatened to overwhelm him. This was his job, his purpose - to listen, to bear witness, and when necessary, to act.

The probe shot out into the darkness, its tiny form swallowed up by the vastness of space. He watched it go, feeling a strange sense of detachment as if he were watching someone else's life play out before him. And perhaps in some ways, he was. For all his years in service, this was the first time he had ever been truly alone, cut off from the rest of humanity by light-years and the cold indifference of the universe.

As the minutes ticked by, Alistair found himself slipping into a half-trance, his thoughts drifting back to happier times. He remembered the first time he had held his newborn son, the tiny fingers curling around his own as if trying to hold on for dear life. He remembered the taste of real Earth coffee, rich and bitter, shared with his wife under a sky full of stars that seemed close enough to touch. But those memories were distant now, fading like echoes in the void. All he had was this station, this endless vigil, and the hope that one day, he might make it back home.

The radio crackled to life again, jolting him out of his reverie. It was the probe, transmitting data back from the edge of nowhere. He braced himself for what he would find, knowing that it could change everything.

What greeted him on the screen was worse than anything he could have imagined. The Normandy was there, alright - or rather, what was left of her. She drifted through space like a broken toy, her hull breached and venting atmosphere into the cold void. There were no signs of life, no distress signals, nothing but silence.

Alistair felt a cold knot form in his stomach as he took in the wreckage. This wasn't just any ship; this was his friend's ship, her crew now lost to the void. And the attackers... whoever or whatever they were, they had left no trace behind.

He sat back in his chair, the weight of it all settling heavily upon him. He knew what he had to do next - file a report, alert command, and wait for further instructions. But as he stared at the lifeless husk that was once the Normandy, he couldn't shake the feeling that something far greater than himself was at work here. Something dark and terrible had come calling, whispering its secrets through the stars. And now it was his turn to listen, to bear witness, and perhaps, if he was lucky, to survive long enough to tell the tale.

</details>

## Miscellaneous stories

<details>
<summary>Click to see spoiler</summary>

### USER
Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war.
The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out and their is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm everything breaks lose today. ### ASSISTANT In the dim light of the bunker, the technician sat hunched over his console, fingers dancing across the keys with a practiced urgency. The air was thick with the hum of machinery and the stench of sweat and fear. He had been down here for what felt like an eternity, but the digital clock on the wall insisted it had only been eighteen months since the world above had turned to ash. His name was Samuel, or at least that's what they called him now. In another life, he had been a software engineer, designing algorithms for self-driving cars and smart homes. Now, he was responsible for keeping twelve souls alive in this underground tomb, their last bastion against the desolation above. He glanced up at the monitor, the green glow illuminating his haggard face. The readings were clear: salted cobalt bombs had been used. The thought sent a shiver down his spine. They had all heard the rumors, of course, but they had clung to hope like a drowning man to driftwood. Hope that the war hadn't been as bad as they feared, that the surface would be habitable again soon. But now, the truth was undeniable. The world outside was dead, and it would remain so for generations. Samuel closed his eyes, trying to block out the images of a scorched earth, devoid of life. He could hear the others in the background, their voices a low murmur as they went about their daily tasks. They didn't know yet. How could he tell them? How could he be the one to snuff out the last ember of hope that had kept them going for so long? He took a deep breath and stood up, his joints creaking in protest. He made his way through the narrow corridors, past the rows of bunks where some slept fitfully, their dreams haunted by memories of a world they would never see again. Others were awake, their faces etched with lines of worry and despair. They looked up as he passed, but he couldn't meet their gazes. He reached the command center, where Captain Reynolds was poring over maps and reports. The man looked up as Samuel entered, his eyes heavy with exhaustion. "What is it?" he asked, his voice gruff. Samuel hesitated for a moment before speaking. "Sir, I have some... troubling news." The captain's expression didn't change, but Samuel could see the tension in his jaw. He had been expecting this, too. "Go on," he said simply. "The radiation readings... they're off the charts. It's cobalt-60. The half-life is over five years." Reynolds let out a slow breath, his eyes never leaving Samuel's face. "And our supplies?" "We have enough for six, maybe seven if we ration carefully. But not twelve." The room seemed to shrink around them as the weight of their situation settled in. The captain rubbed his temples, his thoughts racing. They had always known that their time in the bunker was limited, but they had been so close. So very close. "We'll need to make a decision," he said finally, his voice barely above a whisper. "Who stays, and who goes." Samuel felt a knot form in his stomach. He knew what that meant. It was survival of the fittest down here now, and there were those among them who would do anything to ensure their own survival. 
As he made his way back to his console, he couldn't help but think about the world above. The trees that would never grow again, the birds that would never sing. He thought of his wife and daughter, their faces fading like photographs left in the sun. They had been lucky, they said, to have made it into the bunker. But now, he wasn't so sure. In the mess hall, the atmosphere was tense. Whispers and hushed conversations filled the air as people tried to make sense of what they had just heard. Samuel sat alone at a table, staring blankly at his food. He couldn't eat. Not now. He felt a hand on his shoulder and looked up to see Maria, the doctor. Her eyes were red-rimmed, but her voice was steady. "We need to talk," she said, guiding him towards the infirmary. Once inside, she closed the door behind them and leaned against it, her body trembling. "What are we going to do?" she asked, her voice cracking. Samuel shook his head. "I don't know." "We can't just let six of us die," she said, her voice rising in panic. "There has to be another way." He looked at her, at the desperation etched on her face, and felt a flicker of anger. "What do you suggest? We all hold hands and sing Kumbaya while we starve to death?" Maria's eyes flashed with anger, but she quickly composed herself. "No," she said softly. "But we can't just give up." Samuel sighed and ran a hand through his greasy hair. "I'm not giving up," he said. "I'm just being realistic." She looked at him for a long moment before nodding. "You're right," she said, her voice barely audible. "We need to be realistic." They sat in silence for a while, each lost in their own thoughts. Then, Maria spoke up again. "There is one option," she said hesitantly. "But it's... drastic." Samuel raised an eyebrow. "What?" "We could... we could draw straws," she said, her voice barely above a whisper. "Choose six people at random to leave the bunker and take their chances on the surface." He stared at her in disbelief. "You can't be serious." "It's not ideal," she admitted. "But it's better than just letting them die down here." Samuel shook his head, his mind reeling. He couldn't fathom the thought of sending people out into that wasteland, knowing they would likely never return. But what other choice did they have? As he sat there, the weight of their decision pressing down on him like a leaden shroud, he heard a commotion outside. Shouting, followed by the sound of scuffling feet and something heavy being dragged across the floor. He exchanged a worried glance with Maria before rushing out into the corridor. What he saw made his blood run cold. A group of men had cornered one of the younger women, their faces twisted with rage and desperation. They were arguing over who would get to leave the bunker, who would get a chance at life. Fists flew, and soon, the sound of gunfire echoed through the halls. Samuel stood there, frozen in place as chaos erupted around him. This was it. The end of everything they had worked so hard to preserve. He could feel the darkness closing in, suffocating him, as he realized that their last hope for survival had just been shattered like glass under a bootheel. And all he could do was watch as the world above claimed them, one by one. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1948 Gold Rush, before moving to London in the mid 1970s. 
There should be a mysterious 'occult' item that connects these two time periods. ### ASSISTANT Chapter One: The Golden Veil The year was 1848, and California was a land of dreams and desperation. Men from all corners of the world had flocked to its golden shores, lured by the promise of wealth beyond their wildest imaginations. They came with pickaxes and pans, with hope in their hearts and greed in their eyes, ready to tear apart the earth itself in search of fortune. In the midst of this frenzy stood a solitary figure, a man who seemed out of place amidst the chaos. He was tall and lean, with a face that bore the lines of a thousand stories. His clothes were simple but well-made, his boots worn from years of travel. He moved with an air of quiet authority, as if he belonged to another time entirely. His name was Alistair Blackwood, and he had come to California not for gold, but for something far more valuable: knowledge. A scholar by trade and an adventurer at heart, Blackwood had spent his life chasing after ancient texts and forgotten lore, seeking to unravel the secrets of the universe. And now, he believed he was on the verge of a discovery that would change everything. Rumors had reached him of a strange artifact, said to have been found deep within the Sierra Nevada mountains. It was a small, intricately carved box made of an unknown metal, its surface etched with symbols that defied translation. Those who claimed to have seen it spoke in hushed tones of its otherworldly beauty and the eerie sense of power that seemed to emanate from within. Blackwood had tracked the rumors to a dusty saloon in Sacramento, where he hoped to find answers among the grizzled miners who frequented the place. As he pushed open the swinging doors, the din of voices and clinking glasses washed over him like a wave. He scanned the room, his gaze settling on a group of men huddled around a table in the corner. One look at their faces told him they were the ones he sought: sun-weathered and unshaven, their eyes glinting with a mixture of excitement and fear as they whispered amongst themselves. He approached them slowly, aware that he was an outsider in this rough-and-tumble world. "Gentlemen," he said, his voice low and cultured, "I couldn't help but overhear your conversation. Might I be so bold as to ask what has you all so...animated?" The men exchanged wary glances before one of them spoke up. "You ain't from around here, are ya?" he drawled, eyeing Blackwood's fine clothes with suspicion. "No," Blackwood admitted, "I am not. But I have traveled far and wide in search of knowledge, and I believe you may possess something that could be of great interest to me." He reached into his pocket and produced a small leather pouch, which he placed on the table with a soft thud. The men's eyes widened as they saw the glint of gold within. "I understand there is an object - a box, perhaps? - that has recently come into your possession. I would be willing to pay handsomely for any information you might have about it." The miners looked at each other, then back at Blackwood. Finally, the one who had spoken before nodded slowly. "We might know somethin' 'bout that," he said, his voice dropping to a conspiratorial whisper. "But it ain't no ordinary box." As they began to tell their tale, the saloon faded away around them, replaced by images of dark caverns and glittering treasure. Blackwood leaned forward, his heart pounding with anticipation. This was it - the moment he had been waiting for. 
The box, they said, had been found buried beneath a waterfall, hidden in a cave that seemed to defy all natural laws. It was guarded by strange creatures unlike anything any of them had ever seen: half-man, half-beast, with eyes that glowed like embers in the darkness. But the greatest mystery of all lay within the box itself. For when they opened it, they found not gold or jewels, but a single sheet of parchment, yellowed with age and covered in writing that none of them could decipher. Blackwood's pulse quickened as he listened. This was more than just another piece of occult nonsense; this was the real thing. He knew it in his bones. "I must see this box for myself," he said, his voice barely above a whisper. "Take me to it." The miners exchanged uneasy glances before nodding their assent. They led him out of the saloon and into the night, down winding streets and through shadowy alleyways until they reached a nondescript building on the edge of town. Inside, they descended a rickety staircase to a dimly lit basement filled with crates and barrels. There, in the center of the room, sat the object of his obsession: a small, unassuming box that seemed to hum with energy. Blackwood reached out a trembling hand to touch it, feeling the power coursing through him like electricity. As he did so, the world around him shifted, and suddenly he was no longer in a dingy basement in Sacramento. He stood instead in a grand library, its shelves groaning under the weight of countless tomes and scrolls. A fire crackled in the hearth, casting flickering shadows on the walls. A figure emerged from the gloom, cloaked in darkness save for two piercing eyes that seemed to bore into his very soul. "You have found it," the voice said, deep and resonant. "The key to unlocking the secrets of the universe." Blackwood stared at the apparition, unable to speak. He knew he should be afraid, but all he felt was awe. "But be warned," the figure continued, "for with great power comes great responsibility. The knowledge contained within this box is not meant for mortal minds." And then, just as suddenly as it had appeared, the vision vanished, leaving Blackwood alone once more in the dimly lit room. He stared at the box, his mind racing. What did it all mean? As he pondered this question, a sudden commotion erupted upstairs. Shouts and gunfire filled the air, followed by the sound of heavy footsteps on the staircase. The miners had betrayed him! He snatched up the box and fled into the night, his heart pounding in his chest. He knew he couldn't stay in California - not now that others were after the artifact. No, he would have to take it far away, to a place where its secrets could be studied in safety. And so began a journey that would span continents and decades, from the wild frontier of the Gold Rush to the foggy streets of Victorian London. For Alistair Blackwood had stumbled upon something much greater than mere treasure; he held in his hands the key to unlocking the very fabric of reality itself. But little did he know that the true mystery was only just beginning... </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. 
The story should not end well for either of them. ### ASSISTANT Chapter 1 The cold bit into my bones as I trudged through the snow, each step a labored effort. My breath came out in ragged puffs, clouding the air around me. In my arms, I cradled the weight of my world - my commander, Hauptmann Müller, his body limp and heavy with pain. His eyes were closed, but I could hear the shallow rasp of each breath he took, a testament to his will to live. The forest around us was a maze of shadows and death, the trees standing like skeletal sentinels against the moonlit sky. The silence was deafening, broken only by the distant rumble of artillery and the occasional crack of a rifle shot. It was a cruel irony that the same snow that muffled our footsteps also betrayed the presence of the enemy, their tracks glowing like phosphorescent snakes in the pale light. I stumbled, my legs threatening to give way under the strain. Hauptmann Müller groaned, his hand clutching at my coat. "Keep moving, Leutnant," he whispered, his voice barely audible. "We must reach the defensive line." I nodded, gritting my teeth as I adjusted my grip on him. The pain in my arms and shoulders was a constant companion now, but it was nothing compared to the fear that gnawed at my gut. We were alone, cut off from our unit during the chaos of the American assault. The bunker we had been holed up in was now a tomb for the men who had trusted us to lead them. A flash of memory seared through my mind, taking me back to that fateful moment when it all went wrong. The thunderous roar of explosions, the earth shaking beneath our feet as if the very foundations of hell were breaking loose. The screams of men, the acrid smell of smoke and cordite, the taste of blood in my mouth. I remembered the look on Müller's face when he realized what was happening - a mixture of disbelief and determination. He had barked orders, trying to rally our troops, but it was too late. The Americans were upon us, their numbers overwhelming. I saw him take a bullet to the leg, his face contorting in agony as he fell. And then there was nothing but chaos and darkness. When I came to, the bunker was silent except for the moans of the dying. The air was thick with smoke, making it hard to breathe. I found Müller slumped against a wall, his uniform soaked in blood. He looked up at me, his eyes glassy with pain but still burning with that same fierce determination. "We have to go," he said, gripping my arm. "They're coming." And so we had fled into the night, leaving behind the wreckage of our command post and the bodies of our comrades. Now, hours later, we were still stumbling through the forest, searching for safety that seemed to recede with every step. I could feel my strength ebbing away, my legs turning to lead. But I couldn't stop, not now. Not when we were so close. The deep defensive line was our only hope, a thin red line that stood between us and annihilation. If we could just make it there... But fate is a cruel mistress, and she had other plans for us. As we staggered out of the trees and into a small clearing, I saw them - a patrol of American soldiers, their silhouettes stark against the snow. They spotted us at the same time, their rifles swinging around to point in our direction. "Halt!" one of them yelled, his voice harsh and commanding. "Hands up!" I froze, my heart pounding in my chest. Müller's grip on me tightened, his breath coming in short gasps. I knew what he was thinking - we couldn't surrender, not like this. 
We were too valuable, too close to the front lines. They would shoot us or take us prisoner, and either way, it would be a death sentence for him. "Run," he whispered, his voice barely audible. "Leave me." I shook my head, tears stinging my eyes. I couldn't do that. I wouldn't. He was my commander, my friend. I owed him everything. But before I could make a move, the air was filled with the sound of gunfire. Müller jerked in my arms, a wet gurgle escaping his lips as he slumped against me. I stared down at him in disbelief, feeling the warmth of his blood soaking through my coat. "Nein," I whispered, my voice choked with grief and rage. "Nein!" I fell to my knees, cradling his lifeless body as the world around me spun out of control. The Americans were closing in, their shouts growing louder. I knew it was over - for both of us. With a final, desperate cry, I raised my pistol and fired into the darkness, hoping to take at least one of them with me. But my aim was wild, my strength gone. The last thing I saw was a flash of light, followed by an explosion of pain in my chest. And then there was nothing but blackness. As I slipped away, I thought of home - of my mother's warm embrace, of the laughter of my little sister. And I knew that I would never see them again. This frozen hellhole would be my grave, and Hauptmann Müller's sacrifice would be forgotten by all but me. But in that moment, as death claimed me, I found a strange sort of peace. We had fought for our Fatherland until the very end, and we had done so with honor. And in the grand tapestry of history, perhaps that was all that mattered. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be *very* dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform. ### ASSISTANT In the dimly lit corner of a nondescript Mexican cantina, Anton Chigurh sat motionless, his eyes scanning the room like a predator assessing its prey. The air was thick with the smell of stale cigarette smoke and cheap tequila, but he remained unfazed, his senses heightened, attuned to every sound, every movement around him. He nursed a glass of water, the condensation beading on its surface like droplets of sweat on cold steel. His mind wandered, as it often did, to the concept of chance. The randomness of life and death fascinated him, the idea that one's fate could hinge on something as simple as a coin flip or the roll of dice. He believed in the purity of such decisions, untainted by human emotion or prejudice. It was a philosophy he had embraced wholeheartedly, and it had served him well in his line of work. A fly buzzed around his head, its tiny wings beating an erratic rhythm against the stillness of the room. He watched it with mild interest, wondering if its life would end here, on this very spot, or if it would continue its meaningless existence elsewhere. The thought brought a small, cruel smile to his lips. The bartender, a grizzled old man with a face like leather, eyed him warily from across the room. 
He knew better than to engage in conversation with this stranger, who had entered his establishment an hour ago and hadn't spoken a word since. There was something about him that made the hairs on the back of his neck stand up, something cold and deadly lurking just beneath the surface. Chigurh's thoughts were interrupted by the sound of the door creaking open. A man in a cheap suit entered, looking around nervously before making his way towards him. He slid into the seat across from Chigurh, who continued to stare straight ahead, as if he hadn't noticed the new arrival. "You're late," Chigurh said finally, his voice low and gravelly. The man in the suit swallowed hard, adjusting his tie. "I apologize. Traffic." Chigurh didn't respond, his eyes still fixed on some distant point. The man fidgeted, reaching into his pocket to retrieve a small envelope. He slid it across the table, and Chigurh picked it up without looking at him, tearing it open with a single, precise movement. Inside was a photograph of a man, along with an address scrawled on a piece of paper. He studied the image for a moment, committing every detail to memory before slipping it back into the envelope and sliding it into his own pocket. "You know what needs to be done," the man in the suit said, his voice barely above a whisper. Chigurh nodded once, his mind already turning to the task at hand. He stood up, leaving a few crumpled bills on the table for the water he hadn't touched. As he walked towards the door, he could feel the weight of the grenade in his other pocket, its cool metal surface pressing against his thigh like a lover's caress. Outside, the heat of the day was oppressive, but Chigurh barely registered it as he climbed into his car and started the engine. He drove through the dusty streets, his thoughts consumed by the upcoming job. The man in the photograph had a wife, two kids. It didn't matter. They were all just pieces on a board, pawns to be moved or removed as necessary. He arrived at the address just as the sun was beginning to set, casting long shadows across the dilapidated building. He sat in his car for several minutes, watching the comings and goings of the neighborhood, waiting for the right moment. When it came, he slipped out of the vehicle and approached the door with a sense of purpose. Inside, he could hear laughter and music, the sounds of a family enjoying their evening together. He smiled to himself as he pulled the grenade from his pocket, feeling its weight in his hand. This was what he lived for: the moment when chance decided who would live and who would die. He took a deep breath, savoring the anticipation, before pulling the pin with a satisfying click. He tossed it into the room, not bothering to wait for the explosion as he turned and walked away. The screams that followed were like music to his ears, a symphony of chaos and destruction. As he drove off into the night, Chigurh couldn't help but feel a sense of satisfaction. Another job well done, another life ended by the whims of fate. He reached into his pocket and pulled out a coin, flipping it idly between his fingers as he considered his next move. Heads or tails? Life or death? It was all the same to him. The world was a cruel place, and he was just its instrument, meting out justice in the only way he knew how: with cold, calculated precision. And if some people happened to get caught in the crossfire... well, that was just the luck of the draw. 
</details>

Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)!
{"license": "other"}
dataset
null
553
Thang203/general_trump_tweets
Thang203
text-classification
[ "bertopic", "text-classification", "region:us" ]
2024-05-22T16:34:37Z
2024-05-22T16:34:44+00:00
4
0
---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---

# general_trump_tweets

This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.

## Usage

To use this model, please install BERTopic:

```
pip install -U bertopic
```

You can use the model as follows:

```python
from bertopic import BERTopic
topic_model = BERTopic.load("Thang203/general_trump_tweets")

topic_model.get_topic_info()
```

## Topic overview

* Number of topics: 647
* Number of training documents: 56571

<details>
  <summary>Click here for an overview of all topics.</summary>

| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | president - realdonaldtrump - rt - trump - vote | 10 | -1_president_realdonaldtrump_rt_trump |
| 0 | thanks - good luck - luck - true - billmaher | 20734 | 0_thanks_good luck_luck_true |
| 1 | httpstcoz0i7wbsgtp - httpstcoggwnkrgz9u - httpstcoknvqf6jdil - httpstcobc2h4ozhqp - httpstcofduvk8cm9s | 1814 | 1_httpstcoz0i7wbsgtp_httpstcoggwnkrgz9u_httpstcoknvqf6jdil_httpstcobc2h4ozhqp |
| 2 | inspiration - thanks - man - role - role model | 1296 | 2_inspiration_thanks_man_role |
| 3 | china - chinese - tariffs - chinas - currency | 755 | 3_china_chinese_tariffs_chinas |
| 4 | obamacare - healthcare - repeal - premiums - replace | 680 | 4_obamacare_healthcare_repeal_premiums |
| 5 | biden - joe biden - joe - bidens - sleepy joe biden | 376 | 5_biden_joe biden_joe_bidens |
| 6 | tax - cuts - tax cuts - taxes - tax cut | 345 | 6_tax_cuts_tax cuts_taxes |
| 7 | realdonaldtrump foxandfriends - foxandfriends - foxnews - realdonaldtrump foxnews - belllabooo13 | 316 | 7_realdonaldtrump foxandfriends_foxandfriends_foxnews_realdonaldtrump foxnews |
| 8 | hillary - crooked hillary - crooked - hillary clinton - clinton | 316 | 8_hillary_crooked hillary_crooked_hillary clinton |
| 9 | iran - nuclear - iranian - sanctions - irans | 314 | 9_iran_nuclear_iranian_sanctions |
| 10 | hotel - luxury - tower - restaurant - rooms | 298 | 10_hotel_luxury_tower_restaurant |
| 11 | golf - course - golf course - golf club - club | 289 | 11_golf_course_golf course_golf club |
| 12 | pelosi - nancy - nancy pelosi - crazy nancy - speaker | 277 | 12_pelosi_nancy_nancy pelosi_crazy nancy |
| 13 | veterans - heroes - honor - memorial day - memorial | 277 | 13_veterans_heroes_honor_memorial day |
| 14 | impeachment - impeach - articles - democrats - articles impeachment | 275 | 14_impeachment_impeach_articles_democrats |
| 15 | rt realdonaldtrump - rt - realdonaldtrump - rt realdonaldtrump thank - realdonaldtrump thank | 272 | 15_rt realdonaldtrump_rt_realdonaldtrump_rt realdonaldtrump thank |
| 16 | run president - run - realdonaldtrump run - realdonaldtrump run president - donald run | 269 | 16_run president_run_realdonaldtrump run_realdonaldtrump run president |
| 17 | hurricane - nhcatlantic - fema - storm - local | 268 | 17_hurricane_nhcatlantic_fema_storm |
| 18 | ballots - mailin - ballot - fraud - voting | 268 | 18_ballots_mailin_ballot_fraud |
| 19 | portland - anarchists - protesters - peaceful - mayor | 257 | 19_portland_anarchists_protesters_peaceful |
| 20 | wind - turbines - wind turbines - alexsalmond - ugly | 244 | 20_wind_turbines_wind turbines_alexsalmond |
| 21 | total endorsement - congressman - endorsement - complete total endorsement - complete total | 237 | 21_total
endorsement_congressman_endorsement_complete total endorsement | | 22 | korea - north korea - north - kim - kim jong | 230 | 22_korea_north korea_north_kim | | 23 | barackobama - obama - president obama - barackobamas - jimmy carter | 228 | 23_barackobama_obama_president obama_barackobamas | | 24 | interviewed - enjoy interviewed - enjoy - interviewed foxandfriends - interviewed seanhannity | 226 | 24_interviewed_enjoy interviewed_enjoy_interviewed foxandfriends | | 25 | donald trump - donald - newsmaxmedia - cpac - trump speak | 221 | 25_donald trump_donald_newsmaxmedia_cpac | | 26 | realdonaldtrump president - trump president - realdonaldtrump trump - realdonaldtrump trump president - realdonaldtrump president 2016 | 217 | 26_realdonaldtrump president_trump president_realdonaldtrump trump_realdonaldtrump trump president | | 27 | apprenticenbc - apprenticenbc realdonaldtrump - realdonaldtrump apprenticenbc - brandiglanville - brandiglanville apprenticenbc | 209 | 27_apprenticenbc_apprenticenbc realdonaldtrump_realdonaldtrump apprenticenbc_brandiglanville | | 28 | ukraine - ukrainian - quid - pro quo - quid pro quo | 207 | 28_ukraine_ukrainian_quid_pro quo | | 29 | wall - border - built - build - southern border | 204 | 29_wall_border_built_build | | 30 | israel - netanyahu - jerusalem - omar - peace | 203 | 30_israel_netanyahu_jerusalem_omar | | 31 | poll - poll trump - carson - reuters - leads | 201 | 31_poll_poll trump_carson_reuters | | 32 | celebrityapprentice - celebapprentice - realdonaldtrump celebrityapprentice - season celebrityapprentice - season | 200 | 32_celebrityapprentice_celebapprentice_realdonaldtrump celebrityapprentice_season celebrityapprentice | | 33 | apprentice - celebrity apprentice - celebrity - season - realdonaldtrump celebrity | 198 | 33_apprentice_celebrity apprentice_celebrity_season | | 34 | trump2016 - realdonaldtrump trump2016 - trump2016 realdonaldtrump - trump16 - president trump2016 | 195 | 34_trump2016_realdonaldtrump trump2016_trump2016 realdonaldtrump_trump16 | | 35 | scotland - turnberry - golf - course - golf course | 193 | 35_scotland_turnberry_golf_course | | 36 | think like - think like champion - like champion - champion - think big | 187 | 36_think like_think like champion_like champion_champion | | 37 | miss - pageant - miss universe - universe - miss usa | 185 | 37_miss_pageant_miss universe_universe | | 38 | fake news - media - fake - news media - fake news media | 185 | 38_fake news_media_fake_news media | | 39 | chicago - building - sign - tower - tower chicago | 176 | 39_chicago_building_sign_tower | | 40 | economy - gdp - economic - growth - numbers | 173 | 40_economy_gdp_economic_growth | | 41 | nytimes - failing nytimes - failing - new york times - york times | 164 | 41_nytimes_failing nytimes_failing_new york times | | 42 | thank - thank matt - maria - eric - tammy | 159 | 42_thank_thank matt_maria_eric | | 43 | jeb - jeb bush - bush - jebbush - jebs | 158 | 43_jeb_jeb bush_bush_jebbush | | 44 | schiff - adam - adam schiff - shifty - schiffs | 155 | 44_schiff_adam_adam schiff_shifty | | 45 | entrepreneurs - momentum - young entrepreneurs - youre doing - entrepreneurs dont | 155 | 45_entrepreneurs_momentum_young entrepreneurs_youre doing | | 46 | supreme - supreme court - court - judges - justices | 153 | 46_supreme_supreme court_court_judges | | 47 | rt erictrump - erictrump - kimstrassel - rt marklevinshow - marklevinshow | 147 | 47_rt erictrump_erictrump_kimstrassel_rt marklevinshow | | 48 | fed - federal reserve - reserve - rates 
- inflation | 146 | 48_fed_federal reserve_reserve_rates | | 49 | discussing - interview discussing - interview - squawkcnbc - squawkcnbc interview | 144 | 49_discussing_interview discussing_interview_squawkcnbc | | 50 | cnn - ratings - msnbc - news - wow cnn | 143 | 50_cnn_ratings_msnbc_news | | 51 | hotel - vegas - stayed - las - las vegas | 142 | 51_hotel_vegas_stayed_las | | 52 | thank - thank working - thank working hard - working hard - working hard thank | 139 | 52_thank_thank working_thank working hard_working hard | | 53 | gas - opec - prices - oil - gas prices | 133 | 53_gas_opec_prices_oil | | 54 | focus - passion - goals - success - youre doing | 133 | 54_focus_passion_goals_success | | 55 | interview - great interview - realdonaldtrump great interview - interview realdonaldtrump - realdonaldtrump great | 129 | 55_interview_great interview_realdonaldtrump great interview_interview realdonaldtrump | | 56 | comey - james comey - james - mccabe - fbi | 125 | 56_comey_james comey_james_mccabe | | 57 | flynn - general flynn - michael flynn - general - general michael | 125 | 57_flynn_general flynn_michael flynn_general | | 58 | makeamericagreatagain - realdonaldtrump makeamericagreatagain - trump2016 makeamericagreatagain - keksecorg realdonaldtrump - keksecorg | 124 | 58_makeamericagreatagain_realdonaldtrump makeamericagreatagain_trump2016 makeamericagreatagain_keksecorg realdonaldtrump | | 59 | acting - secretary - pleased - pleased announce - director | 121 | 59_acting_secretary_pleased_pleased announce | | 60 | immigration - border - loopholes - open borders - laws | 119 | 60_immigration_border_loopholes_open borders | | 61 | mueller - mueller report - report - obstruction - collusion | 119 | 61_mueller_mueller report_report_obstruction | | 62 | pennsylvania - thank pennsylvania - thank arizona - arizona - thank | 118 | 62_pennsylvania_thank pennsylvania_thank arizona_arizona | | 63 | senategop - rt senategop - senate - senatemajldr - senjohnbarrasso | 118 | 63_senategop_rt senategop_senate_senatemajldr | | 64 | hispanics - latinos - immigration - illegals - illegal | 115 | 64_hispanics_latinos_immigration_illegals | | 65 | ties - tie - macys - shirts - trump tie | 115 | 65_ties_tie_macys_shirts | | 66 | realdonaldtrump make america great - realdonaldtrump make america - america great - america great rt - realdonaldtrump make | 111 | 66_realdonaldtrump make america great_realdonaldtrump make america_america great_america great rt | | 67 | syria - attack syria - rebels - syrian - assad | 110 | 67_syria_attack syria_rebels_syrian | | 68 | obama - need obama - obamas - realdonaldtrump obama - mess obama | 108 | 68_obama_need obama_obamas_realdonaldtrump obama | | 69 | romney - mitt romney - mitt - stevens - manchin | 106 | 69_romney_mitt romney_mitt_stevens | | 70 | democrats - gopchairwoman - democratic - rt gopchairwoman - democrat | 105 | 70_democrats_gopchairwoman_democratic_rt gopchairwoman | | 71 | debt - budget - barackobama - deficit - national debt | 105 | 71_debt_budget_barackobama_deficit | | 72 | tariffs - trade - barriers - farmers - products | 104 | 72_tariffs_trade_barriers_farmers | | 73 | coronavirus - coronavirus task force - coronavirus task - task force - press briefing | 101 | 73_coronavirus_coronavirus task force_coronavirus task_task force | | 74 | truth - speaks truth - handle truth - speaks - agree | 101 | 74_truth_speaks truth_handle truth_speaks | | 75 | rally - crowd - big rally - big crowd - carolina | 101 | 75_rally_crowd_big rally_big crowd | | 76 | 
iowa - caucus - des moines - moines - des | 97 | 76_iowa_caucus_des moines_moines | | 77 | schools - school - education - students - children | 96 | 77_schools_school_education_students | | 78 | great make america - great make - america great make - america great make america - make america great make | 96 | 78_great make america_great make_america great make_america great make america | | 79 | thank - thank marklevinshow - thank rep - senatordole - thank loudobbs | 95 | 79_thank_thank marklevinshow_thank rep_senatordole | | 80 | stock market - stock - market - high - alltime high | 95 | 80_stock market_stock_market_high | | 81 | lawsuit - sue - fees - legal fees - trump university | 95 | 81_lawsuit_sue_fees_legal fees | | 82 | rove - karl - karlrove - karl rove - ashley | 92 | 82_rove_karl_karlrove_karl rove | | 83 | polls - poll - suppression - fake - foxnews polls | 90 | 83_polls_poll_suppression_fake | | 84 | tweets - tweet - twitter - realdonaldtrump tweets - realdonaldtrump twitter | 90 | 84_tweets_tweet_twitter_realdonaldtrump tweets | | 85 | charity - fundanything - billmaher - million - away money | 90 | 85_charity_fundanything_billmaher_million | | 86 | usmca - trade deal - agreement - trade - manufacturers | 89 | 86_usmca_trade deal_agreement_trade | | 87 | mayor - nypd - nyc - yorks - new yorks | 89 | 87_mayor_nypd_nyc_yorks | | 88 | new hampshire - hampshire - thank new hampshire - thank new - fitn | 89 | 88_new hampshire_hampshire_thank new hampshire_thank new | | 89 | join - tickets - tomorrow - 3pm - rapids | 88 | 89_join_tickets_tomorrow_3pm | | 90 | joan - joanrivers - rivers - melrivers - apprenticenbc | 87 | 90_joan_joanrivers_rivers_melrivers | | 91 | ebola - africa - west africa - flights - infected | 87 | 91_ebola_africa_west africa_flights | | 92 | collusion - russian - russian collusion - intelligence committee - russia | 86 | 92_collusion_russian_russian collusion_intelligence committee | | 93 | debate - debates - won debate - won - drudge | 85 | 93_debate_debates_won debate_won | | 94 | celebrity apprentice - apprentice - celebrity - episode - pm nbc | 84 | 94_celebrity apprentice_apprentice_celebrity_episode | | 95 | emails - 33000 - deleted - 33000 emails - clinton | 83 | 95_emails_33000_deleted_33000 emails | | 96 | paycheckprotectionprogram - paycheck - paycheck protection - program - protection | 82 | 96_paycheckprotectionprogram_paycheck_paycheck protection_program | | 97 | rubio - marco - marco rubio - lightweight senator - senator marco | 78 | 97_rubio_marco_marco rubio_lightweight senator | | 98 | dannyzuker - danny - zuker - dannyzuker realdonaldtrump - realdonaldtrump dannyzuker | 78 | 98_dannyzuker_danny_zuker_dannyzuker realdonaldtrump | | 99 | nflcommish - buffalobills - bills - buffalo - owner | 77 | 99_nflcommish_buffalobills_bills_buffalo | | 100 | melaniatrump - lady - ivanka - ivankatrump - ivanka trump | 76 | 100_melaniatrump_lady_ivanka_ivankatrump | | 101 | oscars - academy - awards - ellen - actor | 76 | 101_oscars_academy_awards_ellen | | 102 | fisa - fbi - fisa court - horowitz - carter page | 75 | 102_fisa_fbi_fisa court_horowitz | | 103 | mini - mini mike - bloomberg - mike - mini mike bloomberg | 75 | 103_mini_mini mike_bloomberg_mike | | 104 | twitter - tech - facebook - conservatives - big tech | 75 | 104_twitter_tech_facebook_conservatives | | 105 | unemployment - unemployment rate - rate - real unemployment - labor | 75 | 105_unemployment_unemployment rate_rate_real unemployment | | 106 | rt whitehouse live president 
realdonaldtrump - whitehouse live president realdonaldtrump - live president realdonaldtrump - whitehouse live president - rt whitehouse live president | 75 | 106_rt whitehouse live president realdonaldtrump_whitehouse live president realdonaldtrump_live president realdonaldtrump_whitehouse live president | | 107 | snl - nbcsnl - saturday night live - night live - saturday night | 75 | 107_snl_nbcsnl_saturday night live_night live | | 108 | whistleblower - whistleblowers - fake whistleblower - whistleblower disappeared - second hand | 73 | 108_whistleblower_whistleblowers_fake whistleblower_whistleblower disappeared | | 109 | maga - soon maga - pennsylvania maga - pennsylvania - tonight maga | 73 | 109_maga_soon maga_pennsylvania maga_pennsylvania | | 110 | great honor - honor - honor thank - great honor thank - great people | 73 | 110_great honor_honor_honor thank_great honor thank | | 111 | police - defund - defund police - want defund - police departments | 73 | 111_police_defund_defund police_want defund | | 112 | rally - trump rally - las vegas - las - overflow | 73 | 112_rally_trump rally_las vegas_las | | 113 | wwe - wrestlemania - wwe hall - hall fame - fame | 73 | 113_wwe_wrestlemania_wwe hall_hall fame | | 114 | bus - funds - improvements - funding - area | 72 | 114_bus_funds_improvements_funding | | 115 | maga - thank maga - great news maga - news maga - great news | 72 | 115_maga_thank maga_great news maga_news maga | | 116 | ivanka - fathers day - father - daughter ivanka - fathers | 71 | 116_ivanka_fathers day_father_daughter ivanka | | 117 | makeamericagreatagain trump2016 - makeamericagreatagain - trump2016 - trump2016 makeamericagreatagain - trump trump2016 | 71 | 117_makeamericagreatagain trump2016_makeamericagreatagain_trump2016_trump2016 makeamericagreatagain | | 118 | approval rating - rating republican party - rating republican - approval rating republican - approval rating republican party | 70 | 118_approval rating_rating republican party_rating republican_approval rating republican | | 119 | america great - make america great - make america - making america great - making america | 70 | 119_america great_make america great_make america_making america great | | 120 | wisconsin - michigan - thank michigan - foxconn - wisconsin vote | 69 | 120_wisconsin_michigan_thank michigan_foxconn | | 121 | ohio - thank ohio - indiana - cincinnati - dayton | 69 | 121_ohio_thank ohio_indiana_cincinnati | | 122 | cuomo - governor cuomo - andrew cuomo - new york - york | 69 | 122_cuomo_governor cuomo_andrew cuomo_new york | | 123 | france - g7 - emmanuelmacron - president emmanuelmacron - angela merkel | 69 | 123_france_g7_emmanuelmacron_president emmanuelmacron | | 124 | rt danscavino - danscavino - rt - nadler - ocean city maryland | 69 | 124_rt danscavino_danscavino_rt_nadler | | 125 | isis - caliphate - isis fighters - fighters - prisoners | 69 | 125_isis_caliphate_isis fighters_fighters | | 126 | carnegie - waldo emerson - waldo - emerson - edison | 68 | 126_carnegie_waldo emerson_waldo_emerson | | 127 | congresswoman - total endorsement - endorsement - complete - complete total endorsement | 68 | 127_congresswoman_total endorsement_endorsement_complete | | 128 | birthday - happy birthday - happy - birthday great - happy birthday great | 67 | 128_birthday_happy birthday_happy_birthday great | | 129 | thank trump2016 - trump2016 - thank america - thank america trump2016 - america trump2016 | 66 | 129_thank trump2016_trump2016_thank america_thank america trump2016 | | 130 | daca - 
make deal - march 5th - deal - recipients | 66 | 130_daca_make deal_march 5th_deal | | 131 | yankees - derek - jeter - derek jeter - baseball | 66 | 131_yankees_derek_jeter_derek jeter | | 132 | art deal - art - deal - books - book | 66 | 132_art deal_art_deal_books | | 133 | japan - abe - prime minister - prime - minister | 66 | 133_japan_abe_prime minister_prime | | 134 | prayers - thoughts prayers - condolences - families - thoughts | 65 | 134_prayers_thoughts prayers_condolences_families | | 135 | rosie - rosie odonnell - odonnell - view - theviewtv | 65 | 135_rosie_rosie odonnell_odonnell_view | | 136 | happy birthday - birthday - happy - ritter1025 - wishing | 65 | 136_happy birthday_birthday_happy_ritter1025 | | 137 | books - book - reading - thanks good luck - read | 65 | 137_books_book_reading_thanks good luck | | 138 | million jobs - added - jobs - jobs created - jobs added | 65 | 138_million jobs_added_jobs_jobs created | | 139 | gun - nra - guns - rt nra - gun control | 63 | 139_gun_nra_guns_rt nra | | 140 | italy - prime minister - prime - minister - honor welcome | 63 | 140_italy_prime minister_prime_minister | | 141 | putin - russia - vladimir - vladimir putin - tougher russia | 63 | 141_putin_russia_vladimir_vladimir putin | | 142 | weiner - pervert - anthony weiner - anthony - sexting | 63 | 142_weiner_pervert_anthony weiner_anthony | | 143 | iowa - iowa great - crowd - crowds - iowa today | 63 | 143_iowa_iowa great_crowd_crowds | | 144 | georgia - briankempga - verification - signature - signatures | 60 | 144_georgia_briankempga_verification_signature | | 145 | makeamericagreatagain - gopdebate - thank makeamericagreatagain - deplorables - twitter account | 60 | 145_makeamericagreatagain_gopdebate_thank makeamericagreatagain_deplorables | | 146 | cruz - ted cruz - ted - goldman - dirty tricks | 60 | 146_cruz_ted cruz_ted_goldman | | 147 | speech - summit - thefamilyleader - citizensunited - reviews | 60 | 147_speech_summit_thefamilyleader_citizensunited | | 148 | midas touch - midas - touch - brand - entrepreneurs | 60 | 148_midas touch_midas_touch_brand | | 149 | foxandfriends monday - 730 - foxandfriends - 700 - foxandfriends 700 | 59 | 149_foxandfriends monday_730_foxandfriends_700 | | 150 | loved - best tv - tv - realdonaldtrump loved - realdonaldtrump meetthepress | 59 | 150_loved_best tv_tv_realdonaldtrump loved | | 151 | plane - boeing - airports - batteries - f35 | 59 | 151_plane_boeing_airports_batteries | | 152 | omarosa - celebapprentice - latoyajackson - omarosas - merger | 59 | 152_omarosa_celebapprentice_latoyajackson_omarosas | | 153 | ford - plant - general motors - motors - car | 58 | 153_ford_plant_general motors_motors | | 154 | college - applications - records - college records - barackobamas | 58 | 154_college_applications_records_college records | | 155 | letterman - david letterman - david - did awesome - realdonaldtrump did awesome | 58 | 155_letterman_david letterman_david_did awesome | | 156 | mexico - southern border - border - southern - honduras | 58 | 156_mexico_southern border_border_southern | | 157 | post office - old post - old post office - post - old | 58 | 157_post office_old post_old post office_post | | 158 | medical - health - supplies - workers - nurses | 57 | 158_medical_health_supplies_workers | | 159 | flotus - rt flotus - realdonaldtrump flotus - president realdonaldtrump flotus - flotus thank | 57 | 159_flotus_rt flotus_realdonaldtrump flotus_president realdonaldtrump flotus | | 160 | piersmorgan - piers - rt piersmorgan - 
piers morgan - morgan | 57 | 160_piersmorgan_piers_rt piersmorgan_piers morgan | | 161 | mccain - john mccain - john - fan john - kpdelbridge | 56 | 161_mccain_john mccain_john_fan john | | 162 | churchill - winston churchill - winston - courage - abraham | 56 | 162_churchill_winston churchill_winston_courage | | 163 | mikeandmike - frankcaliendo - mikeandmike realdonaldtrump - frankcaliendo mikeandmike - frankcaliendo mikeandmike realdonaldtrump | 56 | 163_mikeandmike_frankcaliendo_mikeandmike realdonaldtrump_frankcaliendo mikeandmike | | 164 | fraud - voter fraud - voter - election fraud - election | 55 | 164_fraud_voter fraud_voter_election fraud | | 165 | barackobama - fundraisers - vacation - habitual - dime | 55 | 165_barackobama_fundraisers_vacation_habitual | | 166 | brady - tom brady - tom - quarterback - bob kraft | 55 | 166_brady_tom brady_tom_quarterback | | 167 | celebapprentice - thegarybusey - boardroom - decision celebapprentice - celebapprentice thegarybusey | 55 | 167_celebapprentice_thegarybusey_boardroom_decision celebapprentice | | 168 | afghanistan - taliban - afghan - karzai - soldiers | 55 | 168_afghanistan_taliban_afghan_karzai | | 169 | egypt - brotherhood - muslim brotherhood - muslim - mubarak | 55 | 169_egypt_brotherhood_muslim brotherhood_muslim | | 170 | arod - yankees - contract - mlb - drugs | 55 | 170_arod_yankees_contract_mlb | | 171 | approval - approval rating - rating - approval rating republican - approval rating republican party | 55 | 171_approval_approval rating_rating_approval rating republican | | 172 | birthday - happy birthday - happy - realdonaldtrump happy - bday | 55 | 172_birthday_happy birthday_happy_realdonaldtrump happy | | 173 | great book - book - new book - book great - httpstcofg7yxock9r | 54 | 173_great book_book_new book_book great | | 174 | prayer - day prayer - franklingraham - national day - national day prayer | 54 | 174_prayer_day prayer_franklingraham_national day | | 175 | president vote - vote realdonaldtrump - vote - realdonaldtrump run president - realdonaldtrump run | 54 | 175_president vote_vote realdonaldtrump_vote_realdonaldtrump run president | | 176 | signing - crippled - crippled america - copies - book signing | 54 | 176_signing_crippled_crippled america_copies | | 177 | ufc - serenawilliams - boxing - floyd - fight | 53 | 177_ufc_serenawilliams_boxing_floyd | | 178 | puerto - puerto rico - rico - relief - grid | 53 | 178_puerto_puerto rico_rico_relief | | 179 | louisiana - eddierispone - edwards - bel edwards - john bel | 53 | 179_louisiana_eddierispone_edwards_bel edwards | | 180 | seanhannity - hannity - hannityshow - levin - tonight | 52 | 180_seanhannity_hannity_hannityshow_levin | | 181 | true - true thanks - httpstcoxvsyuvh1bn - true work httpstcouei4u4lpts - true number | 52 | 181_true_true thanks_httpstcoxvsyuvh1bn_true work httpstcouei4u4lpts | | 182 | virus - china virus - china - chinese virus - coronavirus | 52 | 182_virus_china virus_china_chinese virus | | 183 | new book - book - great new book - great new - copy today | 52 | 183_new book_book_great new book_great new | | 184 | kavanaugh - brett kavanaugh - brett - judge brett - judge brett kavanaugh | 51 | 184_kavanaugh_brett kavanaugh_brett_judge brett | | 185 | nasa - space - spacex - astronauts - launch | 51 | 185_nasa_space_spacex_astronauts | | 186 | amazing - watch httpstcopyoiljm0pz - httpstcopyoiljm0pz - great watch httpstcopyoiljm0pz - great news thank | 50 | 186_amazing_watch httpstcopyoiljm0pz_httpstcopyoiljm0pz_great watch 
httpstcopyoiljm0pz | | 187 | march 3rd - march - 3rd - celebapprentice - traceadkins | 50 | 187_march 3rd_march_3rd_celebapprentice | | 188 | greta - gretawire - greta van - 10 pm - gretawire tonight | 50 | 188_greta_gretawire_greta van_10 pm | | 189 | dow - nasdaq - sampp - sampp 500 - 500 | 49 | 189_dow_nasdaq_sampp_sampp 500 | | 190 | kasich - john kasich - ohio - john - negative ads | 49 | 190_kasich_john kasich_ohio_john | | 191 | mittromney - mitt - aggressive - know mittromney - mittromney just | 49 | 191_mittromney_mitt_aggressive_know mittromney | | 192 | venezuela - cuba - democracy - cuban - chavez | 49 | 192_venezuela_cuba_democracy_cuban | | 193 | went bankrupt - bankrupt - bankruptcy - went - buffett | 48 | 193_went bankrupt_bankrupt_bankruptcy_went | | 194 | estate - real estate - time buy - buy - buyers | 48 | 194_estate_real estate_time buy_buy | | 195 | ireland - doonbeg - trumpireland - atlantic ocean - links | 48 | 195_ireland_doonbeg_trumpireland_atlantic ocean | | 196 | dems - bad dems - dems change - american legion - legion | 47 | 196_dems_bad dems_dems change_american legion | | 197 | tomfitton - rt tomfitton - coup - rt tomfitton coup - tomfitton coup | 47 | 197_tomfitton_rt tomfitton_coup_rt tomfitton coup | | 198 | turkey - kurds - syria - ceasefire - idlib | 47 | 198_turkey_kurds_syria_ceasefire | | 199 | iraq - oil - oil iraq - trillion - waste lives | 47 | 199_iraq_oil_oil iraq_trillion | | 200 | website - obamacare website - obamacare - 1b - 1b obamacare | 47 | 200_website_obamacare website_obamacare_1b | | 201 | stewart - jon stewart - jon - thedailyshow - overrated | 47 | 201_stewart_jon stewart_jon_thedailyshow | | 202 | trump signature - signature collection - trump signature collection - collection - available macys | 47 | 202_trump signature_signature collection_trump signature collection_collection | | 203 | warming - global warming - global - freezing - coldest | 47 | 203_warming_global warming_global_freezing | | 204 | run 2016 - 2016 - president 2016 - run president 2016 - running 2016 | 46 | 204_run 2016_2016_president 2016_run president 2016 | | 205 | selffunding - lobbyists - special interests - interests - self funding | 46 | 205_selffunding_lobbyists_special interests_interests | | 206 | economy - economy history - greatest economy - hourly - wages | 46 | 206_economy_economy history_greatest economy_hourly | | 207 | covid19 - rt cdcgov - cdcgov - spread covid19 - spread | 46 | 207_covid19_rt cdcgov_cdcgov_spread covid19 | | 208 | fox amp friends - fox amp - amp friends - doing fox - friends | 46 | 208_fox amp friends_fox amp_amp friends_doing fox | | 209 | fracking - ban - energy - pennsylvania - pennsylvania vote | 46 | 209_fracking_ban_energy_pennsylvania | | 210 | wow - agree - agree 100 - fully agree - wow unbelievable | 45 | 210_wow_agree_agree 100_fully agree | | 211 | veterans - marine - vet - veterans affairs - affairs | 45 | 211_veterans_marine_vet_veterans affairs | | 212 | sternshow - howardstern - howard - stern - howard stern | 45 | 212_sternshow_howardstern_howard_stern | | 213 | pence - mike pence - mikepence - rt mikepence - mike | 45 | 213_pence_mike pence_mikepence_rt mikepence | | 214 | elizabeth - elizabeth warren - warren - pocahontas - goofy | 45 | 214_elizabeth_elizabeth warren_warren_pocahontas | | 215 | bernie - bernie sanders - sanders - crazy bernie - crazy bernie sanders | 45 | 215_bernie_bernie sanders_sanders_crazy bernie | | 216 | barrett - amy - judge - judge barrett - supreme | 44 | 216_barrett_amy_judge_judge 
barrett | | 217 | schneiderman - eric schneiderman - ag eric schneiderman - ag eric - eric | 44 | 217_schneiderman_eric schneiderman_ag eric schneiderman_ag eric | | 218 | charity - 5m - offer - 5m charity - million charity | 44 | 218_charity_5m_offer_5m charity | | 219 | baltimore - elijah - cummings - elijah cummings - ferguson | 43 | 219_baltimore_elijah_cummings_elijah cummings | | 220 | sign - love sign - sign looks - looks great - sign looks great | 43 | 220_sign_love sign_sign looks_looks great | | 221 | doral - national doral - trump national doral - trump national - trump doral | 42 | 221_doral_national doral_trump national doral_trump national | | 222 | hunter - hunter biden - biden - wheres hunter - joe | 42 | 222_hunter_hunter biden_biden_wheres hunter | | 223 | keystone - pipeline - xl - keystone xl - keystone pipeline | 42 | 223_keystone_pipeline_xl_keystone xl | | 224 | art deal - art - worst thing possibly deal desperate - possibly deal - possibly deal desperate | 42 | 224_art deal_art_worst thing possibly deal desperate_possibly deal | | 225 | russia - russian - russia hoax - russia russia - hoax | 41 | 225_russia_russian_russia hoax_russia russia | | 226 | witnesses - testimony - witness - witnesses house - chose | 41 | 226_witnesses_testimony_witness_witnesses house | | 227 | news conference - conference - white house news conference - house news conference - white house news | 41 | 227_news conference_conference_white house news conference_house news conference | | 228 | spying - spy - intelligence - surveillance - operation | 41 | 228_spying_spy_intelligence_surveillance | | 229 | football - world cup - cup - national champions - tigers | 41 | 229_football_world cup_cup_national champions | | 230 | makeamericagreatagain - thank support - makeamericagreatagain trump2016 - thank america - trump2016makeamericagreatagain | 40 | 230_makeamericagreatagain_thank support_makeamericagreatagain trump2016_thank america | | 231 | schumer - chuck schumer - chuck - cryin - cryin chuck | 40 | 231_schumer_chuck schumer_chuck_cryin | | 232 | party - party american - party party - republican party party - republican party | 40 | 232_party_party american_party party_republican party party | | 233 | register - register vote - early voting - day register - day register vote | 39 | 233_register_register vote_early voting_day register | | 234 | testing - cases - tests - million tests - country world | 39 | 234_testing_cases_tests_million tests | | 235 | benghazi - consulate - attack - benghazi terrorist - attack benghazi | 39 | 235_benghazi_consulate_attack_benghazi terrorist | | 236 | farmers - secretarysonny - rt secretarysonny - ranchers - supply | 39 | 236_farmers_secretarysonny_rt secretarysonny_ranchers | | 237 | rebuilding - years rebuilding nations finally rebuilding - nations finally rebuilding nation - rebuilding nations finally rebuilding nation - finally rebuilding nation | 39 | 237_rebuilding_years rebuilding nations finally rebuilding_nations finally rebuilding nation_rebuilding nations finally rebuilding nation | | 238 | snowden - traitor - snowden traitor - edward snowden - edward | 38 | 238_snowden_traitor_snowden traitor_edward snowden | | 239 | drug - drug prices - prices - drug companies - pharma | 38 | 239_drug_drug prices_prices_drug companies | | 240 | rigged election - rigged - election - election rigged - election rigged election | 38 | 240_rigged election_rigged_election_election rigged | | 241 | tigerwoods - tiger - dustin - majors - themasters | 38 | 
241_tigerwoods_tiger_dustin_majors | | 242 | gopchairwoman - rt gopchairwoman - gopchairwoman realdonaldtrump - rt gopchairwoman realdonaldtrump - gopchairwoman mainstream media | 37 | 242_gopchairwoman_rt gopchairwoman_gopchairwoman realdonaldtrump_rt gopchairwoman realdonaldtrump | | 243 | volunteer trump - volunteer trump election - trump election - poll watcher sign - election poll watcher | 37 | 243_volunteer trump_volunteer trump election_trump election_poll watcher sign | | 244 | energy - american energy - coal - electricity - energy industry | 37 | 244_energy_american energy_coal_electricity | | 245 | luther - alabama - strange - roy - roy moore | 37 | 245_luther_alabama_strange_roy | | 246 | kenya - kenyamoore - leezagibbons - ianziering - kenyamoore realdonaldtrump | 36 | 246_kenya_kenyamoore_leezagibbons_ianziering | | 247 | dbongino - firefighters - fireman - union - dues | 36 | 247_dbongino_firefighters_fireman_union | | 248 | small - small businesses - businesses - small business - business | 36 | 248_small_small businesses_businesses_small business | | 249 | anthem - players - nfl - national anthem - disrespect | 36 | 249_anthem_players_nfl_national anthem | | 250 | florida - tampa - tampa florida - thank florida - florida thank | 36 | 250_florida_tampa_tampa florida_thank florida | | 251 | businessman - businessman run - business man - business - run country | 36 | 251_businessman_businessman run_business man_business | | 252 | rasmussen - approval - rasmussenpoll - rasmussen poll - approval rating | 36 | 252_rasmussen_approval_rasmussenpoll_rasmussen poll | | 253 | magazine - time magazine - newsweek - york magazine - new york magazine | 35 | 253_magazine_time magazine_newsweek_york magazine | | 254 | sanctuary - sanctuary cities - cities - californias - criminals | 35 | 254_sanctuary_sanctuary cities_cities_californias | | 255 | crisis - border - security crisis - border crisis - southern border | 35 | 255_crisis_border_security crisis_border crisis | | 256 | tzu - sun - macarthur - patton - douglas macarthur | 34 | 256_tzu_sun_macarthur_patton | | 257 | apple - screen - iphone - larger - samsung | 34 | 257_apple_screen_iphone_larger | | 258 | theory - critical - race - federal agencies - antiamerican | 34 | 258_theory_critical_race_federal agencies | | 259 | strzok - peter - peter strzok - lisa page - lover | 34 | 259_strzok_peter_peter strzok_lisa page | | 260 | flotus melania - melania - flotus - melania honored - flotus melania honored | 34 | 260_flotus melania_melania_flotus_melania honored | | 261 | jobs jobs - jobs jobs jobs - jobs - jobsnotmobs - jobs maga | 34 | 261_jobs jobs_jobs jobs jobs_jobs_jobsnotmobs | | 262 | 13th season - 13th - 13th season star - season star - 13th season star celebapprentice | 33 | 262_13th season_13th_13th season star_season star | | 263 | maga - rt realdonaldtrump thank - thank maga - realdonaldtrump thank - realdonaldtrump maga | 33 | 263_maga_rt realdonaldtrump thank_thank maga_realdonaldtrump thank | | 264 | antifa - antifa scum - ran hills - attacking people - rt mrandyngo | 33 | 264_antifa_antifa scum_ran hills_attacking people | | 265 | global warming - warming - global - climate - climate change | 33 | 265_global warming_warming_global_climate | | 266 | shutdown - shut government - shutdowns - border security - democrats open | 33 | 266_shutdown_shut government_shutdowns_border security | | 267 | negotiation - know exactly - persuasion power - think wants - view conflict | 33 | 267_negotiation_know exactly_persuasion 
power_think wants | | 268 | votes - biden - 75000000 votes - biden got - trump votes | 33 | 268_votes_biden_75000000 votes_biden got | | 269 | demdebate - realdonaldtrump demdebate - boring - debate - poverty | 33 | 269_demdebate_realdonaldtrump demdebate_boring_debate | | 270 | course - ferry point - ferry - realdonaldtrump trumpdoral - realdonaldtrump course | 33 | 270_course_ferry point_ferry_realdonaldtrump trumpdoral | | 271 | megynkelly - crazy megynkelly - overrated - goldberg - ratings | 32 | 271_megynkelly_crazy megynkelly_overrated_goldberg | | 272 | bolton - john bolton - john - dumbest people - boltons | 32 | 272_bolton_john bolton_john_dumbest people | | 273 | consumer - consumer confidence - confidence - highest level - highest | 32 | 273_consumer_consumer confidence_confidence_highest level | | 274 | lesm - law enforcement officers - enforcement officers - law enforcement - enforcement | 32 | 274_lesm_law enforcement officers_enforcement officers_law enforcement | | 275 | carolina - north carolina - north - thank north carolina - thank north | 32 | 275_carolina_north carolina_north_thank north carolina | | 276 | cabinet - meeting - great meeting - cabinet meeting - ceos | 31 | 276_cabinet_meeting_great meeting_cabinet meeting | | 277 | swine - swine flu - flu - h1n1 - h1n1 swine flu | 31 | 277_swine_swine flu_flu_h1n1 | | 278 | opioid - opioid crisis - crisis - prescription - epidemic | 31 | 278_opioid_opioid crisis_crisis_prescription | | 279 | pastor - christian - christian pastor - saeed - abedini | 31 | 279_pastor_christian_christian pastor_saeed | | 280 | lance - armstrong - lancearmstrong - oprah - sued | 31 | 280_lance_armstrong_lancearmstrong_oprah | | 281 | witch hunt - hunt - witch - collusion - russian witch | 31 | 281_witch hunt_hunt_witch_collusion | | 282 | amendment - 2nd amendment - 2nd - save 2nd - save 2nd amendment | 31 | 282_amendment_2nd amendment_2nd_save 2nd | | 283 | join - wichita - 7pme - california tomorrow - morning looking forward | 31 | 283_join_wichita_7pme_california tomorrow | | 284 | rt whitehouse - whitehouse - america wicked - protected - wicked | 31 | 284_rt whitehouse_whitehouse_america wicked_protected | | 285 | weeklyaddress - usstatevisit - memorialday - unga - usa japan httpstcoevxfqavnfs | 31 | 285_weeklyaddress_usstatevisit_memorialday_unga | | 286 | ms13 - ms13 gang - gang members - gang - ice | 31 | 286_ms13_ms13 gang_gang members_gang | | 287 | disgraceful - truth - really sad - disgrace - httpstcozb5xpdks4b | 31 | 287_disgraceful_truth_really sad_disgrace | | 288 | great job fox - job fox - thank foxandfriends - great job - thank foxandfriends great | 31 | 288_great job fox_job fox_thank foxandfriends_great job | | 289 | steel - aluminum - steel industry - industry - mac pro | 30 | 289_steel_aluminum_steel industry_industry | | 290 | think big donald trump - think big donald - big donald trump - big donald - going thinking think big donald | 30 | 290_think big donald trump_think big donald_big donald trump_big donald | | 291 | huffingtonpost - aol - ariannahuff - huffington - arianna | 30 | 291_huffingtonpost_aol_ariannahuff_huffington | | 292 | schiff - adam - adam schiff - impeachment - schiffs | 30 | 292_schiff_adam_adam schiff_impeachment | | 293 | tahmooressi - usmc - sgt - marine - sgt tahmooressi | 30 | 293_tahmooressi_usmc_sgt_marine | | 294 | vaccines - vaccine - therapeutics - usfda - distribution | 30 | 294_vaccines_vaccine_therapeutics_usfda | | 295 | autism - awareness - liub - research - wright | 30 | 
295_autism_awareness_liub_research | | 296 | rt realdonaldtrump law - realdonaldtrump law - law amp order rt - amp order rt - order rt | 30 | 296_rt realdonaldtrump law_realdonaldtrump law_law amp order rt_amp order rt | | 297 | hair - drchimrickles - als - wig - realdonaldtrump remember | 30 | 297_hair_drchimrickles_als_wig | | 298 | convention night - watch republican - republican national convention - rt teamtrump watch - teamtrump watch | 30 | 298_convention night_watch republican_republican national convention_rt teamtrump watch | | 299 | steele - christopher steele - dossier - christopher - steeles | 30 | 299_steele_christopher steele_dossier_christopher | | 300 | virginia - gillespie - state virginia - ed - great state virginia | 29 | 300_virginia_gillespie_state virginia_ed | | 301 | rt realdonaldtrump interviewed - realdonaldtrump interviewed - interviewed - enjoy rt - enjoy rt realdonaldtrump | 29 | 301_rt realdonaldtrump interviewed_realdonaldtrump interviewed_interviewed_enjoy rt | | 302 | mandela - nelson mandela - nelson - africa - south africa | 29 | 302_mandela_nelson mandela_nelson_africa | | 303 | god bless usa - bless usa - america america - usa usa - bless | 29 | 303_god bless usa_bless usa_america america_usa usa | | 304 | christmas - merry - merry christmas - christmas realdonaldtrump - want christmas | 29 | 304_christmas_merry_merry christmas_christmas realdonaldtrump | | 305 | paris - france - germany - terror - cemetery | 29 | 305_paris_france_germany_terror | | 306 | radical islamic - islamic - terrorism - radical islamic terrorism - islamic terrorism | 29 | 306_radical islamic_islamic_terrorism_radical islamic terrorism | | 307 | realdonaldtrump vote - vote - vote realdonaldtrump - vote donald - igotangeleyes realdonaldtrump | 29 | 307_realdonaldtrump vote_vote_vote realdonaldtrump_vote donald | | 308 | blue monster - monster - blue - monster trump national doral - blue monster trump national doral | 29 | 308_blue monster_monster_blue_monster trump national doral | | 309 | rt mariabartiromo - mariabartiromo - sundayfutures - morningsmaria foxbusiness - morningsmaria | 29 | 309_rt mariabartiromo_mariabartiromo_sundayfutures_morningsmaria foxbusiness | | 310 | bergdahl - taliban - deserter - exchange - afghanistan | 29 | 310_bergdahl_taliban_deserter_exchange | | 311 | foxandfriends minutes - foxandfriends - foxandfriends enjoy - minutes foxandfriends - minutes | 29 | 311_foxandfriends minutes_foxandfriends_foxandfriends enjoy_minutes foxandfriends | | 312 | doral - miami - doral miami - national doral - trump national doral | 28 | 312_doral_miami_doral miami_national doral | | 313 | silent - silent majority - majority - realdonaldtrump silent - stronger | 28 | 313_silent_silent majority_majority_realdonaldtrump silent | | 314 | unemployment - african - african american - lowest - unemployment lowest | 28 | 314_unemployment_african_african american_lowest | | 315 | mexico - right mexico - mexican - bordercrisis - slim | 28 | 315_mexico_right mexico_mexican_bordercrisis | | 316 | golf - play golf - exercise - played golf - played | 28 | 316_golf_play golf_exercise_played golf | | 317 | daytona - nascar - 500 - daytona500 - france | 28 | 317_daytona_nascar_500_daytona500 | | 318 | einstein - albert einstein - albert - im smart - smart just | 28 | 318_einstein_albert einstein_albert_im smart | | 319 | impeachment hoax - impeachment - hoax - left dems - radical left dems | 28 | 319_impeachment hoax_impeachment_hoax_left dems | | 320 | todd - chuck todd - sleepy eyes - 
chuck - eyes | 28 | 320_todd_chuck todd_sleepy eyes_chuck | | 321 | fake news - fake - news - fake news fake news - news fake news | 28 | 321_fake news_fake_news_fake news fake news | | 322 | lady - woman - great lady - great woman - pam bondi | 27 | 322_lady_woman_great lady_great woman | | 323 | ventilators - needed ventilators - hospitals - republic - moreno | 27 | 323_ventilators_needed ventilators_hospitals_republic | | 324 | certificate - birth certificate - birth - born - kenya | 27 | 324_certificate_birth certificate_birth_born | | 325 | ron - desantis - ron desantis - rick scott - rick | 27 | 325_ron_desantis_ron desantis_rick scott | | 326 | mikepence - governor mike - governor mikepence - pence - mike pence | 27 | 326_mikepence_governor mike_governor mikepence_pence | | 327 | isis - abu bakr - newsmaxmedia wandacarruthers - wandacarruthers - bakr | 27 | 327_isis_abu bakr_newsmaxmedia wandacarruthers_wandacarruthers | | 328 | hydroxychloroquine - yale - cavuto - risch - dr | 27 | 328_hydroxychloroquine_yale_cavuto_risch | | 329 | nato - germany - europe - pay - paying | 27 | 329_nato_germany_europe_pay | | 330 | dc - rolling thunder - thunder - washington dc - dc january | 27 | 330_dc_rolling thunder_thunder_washington dc | | 331 | tmobile - johnlegere - service - sucks - customer | 27 | 331_tmobile_johnlegere_service_sucks | | 332 | washington - washington establishment - washington politician - establishment elected - play rules | 27 | 332_washington_washington establishment_washington politician_establishment elected | | 333 | geraldo - realdonaldtrump geraldo - geraldorivera - geraldo rivera - terramarkov16 | 26 | 333_geraldo_realdonaldtrump geraldo_geraldorivera_geraldo rivera | | 334 | mikepence - rt mikepence - president realdonaldtrumps - realdonaldtrumps - mikepence amp realdonaldtrump | 26 | 334_mikepence_rt mikepence_president realdonaldtrumps_realdonaldtrumps | | 335 | ndaa - national defense - pay raise - senate passed - troops | 26 | 335_ndaa_national defense_pay raise_senate passed | | 336 | nice words - cnn thank - thank nice words - words - thank nice | 26 | 336_nice words_cnn thank_thank nice words_words | | 337 | graydon - graydon carter - carter - vanityfair - restaurants | 26 | 337_graydon_graydon carter_carter_vanityfair | | 338 | keith - pennsylvania - lloyd - total endorsement - fred keller | 26 | 338_keith_pennsylvania_lloyd_total endorsement | | 339 | real estate - estate - real - love real estate - tangible | 26 | 339_real estate_estate_real_love real estate | | 340 | economy track - economy - track - economic disaster - president genius | 26 | 340_economy track_economy_track_economic disaster | | 341 | thank iowa - iowa - iacaucus - caucusfortrump - finder httpstcoanvtczqfoq | 26 | 341_thank iowa_iowa_iacaucus_caucusfortrump | | 342 | shirts - ties - shirts ties - ties shirts - macys | 26 | 342_shirts_ties_shirts ties_ties shirts | | 343 | black - black community - african - african americans - community | 26 | 343_black_black community_african_african americans | | 344 | kag2020 - way new - thank north - new mexico - thank | 26 | 344_kag2020_way new_thank north_new mexico | | 345 | witch - witch hunt - hunt - hunt continues - witch hunt continues | 26 | 345_witch_witch hunt_hunt_hunt continues | | 346 | women - womens - coin - national security council - watch ivankatrump | 25 | 346_women_womens_coin_national security council | | 347 | cuban - mark cuban - mark - queynewton - playoffs | 25 | 347_cuban_mark cuban_mark_queynewton | | 348 | toronto - 
canada - canadian - trumptoronto - canadians | 25 | 348_toronto_canada_canadian_trumptoronto | | 349 | tea party - tea - party - attack tea party - attack tea | 25 | 349_tea party_tea_party_attack tea party | | 350 | dumbest man - brian williams - dumbest - williams - lawrence | 25 | 350_dumbest man_brian williams_dumbest_williams | | 351 | psycho - mika - joe scarborough - scarborough - morningjoe | 25 | 351_psycho_mika_joe scarborough_scarborough | | 352 | lombardi - vince lombardi - vince - quit - youre loser | 25 | 352_lombardi_vince lombardi_vince_quit | | 353 | hillaryclinton - bigleaguetruth - debate bigleaguetruth - debates - debates2016 | 25 | 353_hillaryclinton_bigleaguetruth_debate bigleaguetruth_debates | | 354 | whitehouse - rt whitehouse - just left white - just left white house - join live | 25 | 354_whitehouse_rt whitehouse_just left white_just left white house | | 355 | spied - spied campaign - campaign got - spied campaign got - campaign | 25 | 355_spied_spied campaign_campaign got_spied campaign got | | 356 | cohen - michael cohen - lawyer - michael - campaign finance | 25 | 356_cohen_michael cohen_lawyer_michael | | 357 | libya - embassy - libyan - attack - attack embassy | 25 | 357_libya_embassy_libyan_attack | | 358 | dubai - damac - estates - launch trump - akoya | 25 | 358_dubai_damac_estates_launch trump | | 359 | collar - jobs blue - barackobama - barackobama claims - blue collar | 24 | 359_collar_jobs blue_barackobama_barackobama claims | | 360 | amnesty - executive action - executive - executive amnesty - congress use | 24 | 360_amnesty_executive action_executive_executive amnesty | | 361 | obamagate - realdonaldtrump obamagate - obamagate rt - rt realdonaldtrump obamagate - barack | 24 | 361_obamagate_realdonaldtrump obamagate_obamagate rt_rt realdonaldtrump obamagate | | 362 | emmys - seth - seth meyers - meyers - emmy | 24 | 362_emmys_seth_seth meyers_meyers | | 363 | wine - charlottesville - trumpwinery - highest rated - va | 24 | 363_wine_charlottesville_trumpwinery_highest rated | | 364 | florida - fl - rt teamtrump president realdonaldtrump - teamtrump president realdonaldtrump - phoenix | 24 | 364_florida_fl_rt teamtrump president realdonaldtrump_teamtrump president realdonaldtrump | | 365 | cadillacchamp - cadillac - wgc - championship - cadillac championship | 24 | 365_cadillacchamp_cadillac_wgc_championship | | 366 | mac - miller - song - millers - mac millers | 24 | 366_mac_miller_song_millers | | 367 | rt scavino45 - scavino45 - statedept secpompeo - scavino45 congratulations - rt scavino45 congratulations | 24 | 367_rt scavino45_scavino45_statedept secpompeo_scavino45 congratulations | | 368 | sleepy - sleepy joe - joe - joe hiden - hiden | 24 | 368_sleepy_sleepy joe_joe_joe hiden | | 369 | let make - work hard - let - make america great - hard | 24 | 369_let make_work hard_let_make america great | | 370 | debt - trillion debt - trillion - usa rich - owe | 24 | 370_debt_trillion debt_trillion_usa rich | | 371 | christmas - merry christmas - merry - christmas tree - tree | 23 | 371_christmas_merry christmas_merry_christmas tree | | 372 | henry mcmaster - henry - mcmaster - carolina - governor henry mcmaster | 23 | 372_henry mcmaster_henry_mcmaster_carolina | | 373 | tea - tea party - tea party convention - party convention - sc | 23 | 373_tea_tea party_tea party convention_party convention | | 374 | india - modi - narendramodi - india rt - rt narendramodi | 23 | 374_india_modi_narendramodi_india rt | | 375 | kentucky - mattbevin - governor mattbevin 
- matt - cameron | 23 | 375_kentucky_mattbevin_governor mattbevin_matt | | 376 | politico - dishonest - reporting - losing - going business | 23 | 376_politico_dishonest_reporting_losing | | 377 | food stamps - stamps - food - food stamp - stamp | 23 | 377_food stamps_stamps_food_food stamp | | 378 | shooting - law enforcement - enforcement - el paso - paso | 23 | 378_shooting_law enforcement_enforcement_el paso | | 379 | ronald reagan - ronald - reagan - president ronald - president ronald reagan | 23 | 379_ronald reagan_ronald_reagan_president ronald | | 380 | california - fires - forest - wildfires - management | 22 | 380_california_fires_forest_wildfires | | 381 | flags - flag - american flag - burn - powmia | 22 | 381_flags_flag_american flag_burn | | 382 | reagan - ronald - ronald reagan - best president - president reagan | 22 | 382_reagan_ronald_ronald reagan_best president | | 383 | jobless claims - jobless - claims - weekly jobless - weekly jobless claims | 22 | 383_jobless claims_jobless_claims_weekly jobless | | 384 | henry ford - ford - henry - think thing - youre right | 22 | 384_henry ford_ford_henry_think thing | | 385 | filibuster - 51 - 60 - need 60 - 51 votes | 22 | 385_filibuster_51_60_need 60 | | 386 | make america safe - america safe - america safe great - safe - safe great | 22 | 386_make america safe_america safe_america safe great_safe | | 387 | barr - general barr - attorney general barr - attorney general - attorney | 22 | 387_barr_general barr_attorney general barr_attorney general | | 388 | leadership - leadership need - incompetent leadership - great need - leadership country | 22 | 388_leadership_leadership need_incompetent leadership_great need | | 389 | trumpvlog - questions - tweet questions - todays trumpvlog - todays | 22 | 389_trumpvlog_questions_tweet questions_todays trumpvlog | | 390 | christmas - merry christmas - merry - christmas happy - christmas christmas | 22 | 390_christmas_merry christmas_merry_christmas happy | | 391 | brennan - john brennan - liar - james clapper - intelligence | 22 | 391_brennan_john brennan_liar_james clapper | | 392 | httpstcogsfsghkmdm - httpstcoznopfstnn3 - httpstcomu6grsamv9 - httpstcogsfsghkmdm httpstcoznopfstnn3 - httpstcogsfsghkmdm httpstcomu6grsamv9 | 22 | 392_httpstcogsfsghkmdm_httpstcoznopfstnn3_httpstcomu6grsamv9_httpstcogsfsghkmdm httpstcoznopfstnn3 | | 393 | federal judge - unconstitutional - judge - gov - ruled | 22 | 393_federal judge_unconstitutional_judge_gov | | 394 | pete rose - hall fame - pete - fame - baseball | 21 | 394_pete rose_hall fame_pete_fame | | 395 | ban - travel ban - travel - need travel - far larger | 21 | 395_ban_travel ban_travel_need travel | | 396 | presidential harassment - harassment - presidential - harassment httpstcog3f6qbnsma - brought charges | 21 | 396_presidential harassment_harassment_presidential_harassment httpstcog3f6qbnsma | | 397 | greatest witch - greatest witch hunt - witch hunt - hunt - witch | 21 | 397_greatest witch_greatest witch hunt_witch hunt_hunt | | 398 | qvc - jewelry - melania - wife melania - est | 21 | 398_qvc_jewelry_melania_wife melania | | 399 | amazon - retailers - post - post office - postal | 21 | 399_amazon_retailers_post_post office | | 400 | law amp order - amp order - law amp - law - order | 21 | 400_law amp order_amp order_law amp_law | | 401 | macys - profiling - racial - racial profiling - boycott | 21 | 401_macys_profiling_racial_racial profiling | | 402 | rand - rand paul - paul - kraftywurker - bye | 21 | 402_rand_rand 
paul_paul_kraftywurker | | 403 | dbongino - rt dbongino - whowpro - especially rt - men know | 21 | 403_dbongino_rt dbongino_whowpro_especially rt | | 404 | criminal justice - criminal justice reform - justice reform - reform - criminal | 21 | 404_criminal justice_criminal justice reform_justice reform_reform | | 405 | olympic - olympics - gold - coach - winning gold | 21 | 405_olympic_olympics_gold_coach | | 406 | rt erictrump - erictrump - trueamerica1st - rt trueamerica1st - job erictrump | 21 | 406_rt erictrump_erictrump_trueamerica1st_rt trueamerica1st | | 407 | lamestream - lamestream media - media - media totally corrupt - lamestream media totally | 20 | 407_lamestream_lamestream media_media_media totally corrupt | | 408 | flights - stop flights - west africa - flights west - africa | 20 | 408_flights_stop flights_west africa_flights west | | 409 | maralago - palm - palm beach - beach - club | 20 | 409_maralago_palm_palm beach_beach | | 410 | fred trump - fred - macleod - mary - good advice | 20 | 410_fred trump_fred_macleod_mary | | 411 | scam - giant scam - total scam - giant - great scam | 20 | 411_scam_giant scam_total scam_giant | | 412 | infrastructure - roads - train - fast train - trillion | 20 | 412_infrastructure_roads_train_fast train | | 413 | death penalty - penalty - death - trial death - trial death penalty | 20 | 413_death penalty_penalty_death_trial death | | 414 | energy - passion - energy donald trump - energy donald - thequote | 20 | 414_energy_passion_energy donald trump_energy donald | | 415 | promises - promises promises - promises kept - promises promises kept - kept | 20 | 415_promises_promises promises_promises kept_promises promises kept | | 416 | scotland - alexsalmond - aberdeenshire - alexsalmond rt - scotland great | 20 | 416_scotland_alexsalmond_aberdeenshire_alexsalmond rt | | 417 | autism - vaccinations - vaccines - doses - shots | 20 | 417_autism_vaccinations_vaccines_doses | | 418 | collusion obstruction - collusion - obstruction - obstruction collusion - underlying crime | 20 | 418_collusion obstruction_collusion_obstruction_obstruction collusion | | 419 | swamp - draining swamp - draining - drain swamp - drain | 20 | 419_swamp_draining swamp_draining_drain swamp | | 420 | pardon - thanksgiving turkey - pardoned - pardons - thanksgiving | 20 | 420_pardon_thanksgiving turkey_pardoned_pardons | | 421 | nfl - mikeandmike - espngreeny - espn - morning mikeandmike | 20 | 421_nfl_mikeandmike_espngreeny_espn | | 422 | medal freedom - medal - presidential medal - presidential medal freedom - freedom | 20 | 422_medal freedom_medal_presidential medal_presidential medal freedom | | 423 | beat hillary - beat - hillary - beat clinton - breaking records | 20 | 423_beat hillary_beat_hillary_beat clinton | | 424 | thoughts prayers - prayers - thoughts - shooting - officers shot | 20 | 424_thoughts prayers_prayers_thoughts_shooting | | 425 | florida power - disaster declaration - declaration - florida power amp - power amp | 20 | 425_florida power_disaster declaration_declaration_florida power amp | | 426 | wharton - great school - school - grads - wharton school | 19 | 426_wharton_great school_school_grads | | 427 | agschneiderman - lightweight agschneiderman - lightweight - moreland - jcope | 19 | 427_agschneiderman_lightweight agschneiderman_lightweight_moreland | | 428 | whistleblower - schiff - gave false information - gave false - false information | 19 | 428_whistleblower_schiff_gave false information_gave false | | 429 | award - diamond award - boone - 
pickens - receiving | 19 | 429_award_diamond award_boone_pickens | | 430 | years office - president history - accomplished - treated badly democrats - years president | 19 | 430_years office_president history_accomplished_treated badly democrats | | 431 | evangelicals - evangelical - robertjeffress - religion - evangelical christians | 19 | 431_evangelicals_evangelical_robertjeffress_religion | | 432 | socialist - socialism - socialists - country rt teamtrump - country rt teamtrump president realdonaldtrump | 19 | 432_socialist_socialism_socialists_country rt teamtrump | | 433 | fireworks - salute america - july - 4th - salute | 19 | 433_fireworks_salute america_july_4th | | 434 | nfl - boring - nfl games - games - soft | 19 | 434_nfl_boring_nfl games_games | | 435 | wallace - chris wallace - chris - mike wallace - mike | 19 | 435_wallace_chris wallace_chris_mike wallace | | 436 | sugar - lordsugar - dopey sugar - sugar lordsugar - dopey | 19 | 436_sugar_lordsugar_dopey sugar_sugar lordsugar | | 437 | crookedhillary - bigleaguetruth - world bigleaguetruth debates2016 - fdn - failed world bigleaguetruth | 18 | 437_crookedhillary_bigleaguetruth_world bigleaguetruth debates2016_fdn | | 438 | rt seanhannity - seanhannity - seanhannity breaking - defending realdonaldtrump - enjoy monday | 18 | 438_rt seanhannity_seanhannity_seanhannity breaking_defending realdonaldtrump | | 439 | 230 - section 230 - section - termination - repeal section 230 | 18 | 439_230_section 230_section_termination | | 440 | macys - shop - macys amp - shop macys - houstongunn | 18 | 440_macys_shop_macys amp_shop macys | | 441 | woodward - bob woodward - bob - quotes - book | 18 | 441_woodward_bob woodward_bob_quotes | | 442 | paul ryan - ryan - paul - lame duck - duck | 18 | 442_paul ryan_ryan_paul_lame duck | | 443 | illegally - enter - illegal aliens - country illegally - aliens | 18 | 443_illegally_enter_illegal aliens_country illegally | | 444 | letterman - david letterman - letterman lateshow - lateshow - david | 18 | 444_letterman_david letterman_letterman lateshow_lateshow | | 445 | paris - accord - france - agreement - announcing decision | 18 | 445_paris_accord_france_agreement | | 446 | jimjordan - rt jimjordan - jordan - ranking member - rt mikehahn | 18 | 446_jimjordan_rt jimjordan_jordan_ranking member | | 447 | caravan - caravans - heading southern border - heading southern - large caravans | 18 | 447_caravan_caravans_heading southern border_heading southern | | 448 | patriots - american patriots - great patriots - beautiful evening - evening | 18 | 448_patriots_american patriots_great patriots_beautiful evening | | 449 | roger - roger stone - stone - illegal witch hunt - illegal witch | 18 | 449_roger_roger stone_stone_illegal witch hunt | | 450 | youre fired - fired - words youre fired - words youre - youre | 18 | 450_youre fired_fired_words youre fired_words youre | | 451 | server - dnc server - dnc - fbi - wasserman schultz | 18 | 451_server_dnc server_dnc_fbi | | 452 | boston - killer - obama care - offend - innocent | 18 | 452_boston_killer_obama care_offend | | 453 | rush - limbaugh - rush limbaugh - rushlimbaugh - great rush | 18 | 453_rush_limbaugh_rush limbaugh_rushlimbaugh | | 454 | int - amp making america great - amp making america - amp making - int foxnews | 18 | 454_int_amp making america great_amp making america_amp making | | 455 | troy - troybalderson - ohio - vote troy - great job congressman | 18 | 455_troy_troybalderson_ohio_vote troy | | 456 | tweeting - live tweeting - debate tonight 
- live - tweeting live | 18 | 456_tweeting_live tweeting_debate tonight_live | | 457 | sexual - rape - sexual assaults - sexual assault - 26000 | 18 | 457_sexual_rape_sexual assaults_sexual assault | | 458 | rink - skating - central park - wollman - central | 17 | 458_rink_skating_central park_wollman | | 459 | vaccine - vaccine development - development - warp - operation warp | 17 | 459_vaccine_vaccine development_development_warp | | 460 | prevail - amp united prevail - strong amp united prevail - httpstcot6ucyapriy - strong amp united | 17 | 460_prevail_amp united prevail_strong amp united prevail_httpstcot6ucyapriy | | 461 | bin - bin laden - laden - seals - bin ladens | 17 | 461_bin_bin laden_laden_seals | | 462 | sheep - counting - contest - bought trump - realdonaldtrump just bought | 17 | 462_sheep_counting_contest_bought trump | | 463 | racist - incompetent rt - blacks - didnt say - tweets | 17 | 463_racist_incompetent rt_blacks_didnt say | | 464 | trump2016 thank - trump2016 - 2016 thanks - iconic - realdonaldtrump trump2016 | 17 | 464_trump2016 thank_trump2016_2016 thanks_iconic | | 465 | nato - nato countries - 130 billion - 130 - pay fair share | 17 | 465_nato_nato countries_130 billion_130 | | 466 | cancer - pls - realdonaldtrump pls - terminal - 500k | 17 | 466_cancer_pls_realdonaldtrump pls_terminal | | 467 | americafirst - thank americafirst - imwithyou - americafirst imwithyou - thank supportamericafirst | 17 | 467_americafirst_thank americafirst_imwithyou_americafirst imwithyou | | 468 | maher - comedian - canned - world trade - praised | 17 | 468_maher_comedian_canned_world trade | | 469 | apologize - apology - danamira realdonaldtrump - danamira - fake controversies whipped perpetually | 17 | 469_apologize_apology_danamira realdonaldtrump_danamira | | 470 | stay tuned - tuned - stay - realdonaldtrump announcing - realdonaldtrump whats | 17 | 470_stay tuned_tuned_stay_realdonaldtrump announcing | | 471 | yossimelman - rt yossimelman - httpstcorljgsc5wlc - comp rt - comp | 17 | 471_yossimelman_rt yossimelman_httpstcorljgsc5wlc_comp rt | | 472 | watergate - obamagate - watergate look - watergate look like - small potatoes | 17 | 472_watergate_obamagate_watergate look_watergate look like | | 473 | suburbs - suburban - low income - suburban women - housing | 17 | 473_suburbs_suburban_low income_suburban women | | 474 | cocaine - heroin - pounds - overdoses - laredo | 17 | 474_cocaine_heroin_pounds_overdoses | | 475 | approved - ambassadors - slow - cabinet - obstruct | 17 | 475_approved_ambassadors_slow_cabinet | | 476 | jewish friends - israel world - jewish - hanukkah - new year | 17 | 476_jewish friends_israel world_jewish_hanukkah | | 477 | home - home favorite - realdonaldtrump home - movie - favorite | 17 | 477_home_home favorite_realdonaldtrump home_movie | | 478 | vanity - vanity fair - vanityfair - circulation - magazine | 16 | 478_vanity_vanity fair_vanityfair_circulation | | 479 | dossier - fake dossier - dossier fbi - fbi - clinton campaign | 16 | 479_dossier_fake dossier_dossier fbi_fbi | | 480 | club growth - club - 1000000 - growth - negative ads | 16 | 480_club growth_club_1000000_growth | | 481 | haters - haters losers - losers - hate - realdonaldtrump hate | 16 | 481_haters_haters losers_losers_hate | | 482 | impeach - committed democrats - republican president - president crime - crime committed | 16 | 482_impeach_committed democrats_republican president_president crime | | 483 | maga - vote maga - debate polls - maga rt - trumptrain | 16 | 483_maga_vote 
maga_debate polls_maga rt | | 484 | obamabiden administration corrupt - administration corrupt - obamabiden administration - corrupt history - administration corrupt history | 16 | 484_obamabiden administration corrupt_administration corrupt_obamabiden administration_corrupt history | | 485 | park - brutally - innocent - werent - documentary | 16 | 485_park_brutally_innocent_werent | | 486 | swamp - drain - drain swamp - washington swamp - draining swamp | 16 | 486_swamp_drain_drain swamp_washington swamp | | 487 | dennisrodman - dennis - famer - hall famer - celebapprentice | 16 | 487_dennisrodman_dennis_famer_hall famer | | 488 | atlantic city - atlantic - city - timing - years ago great | 16 | 488_atlantic city_atlantic_city_timing | | 489 | debates - hillaryclinton - hillaryclinton white - realdonaldtrump vs - vs hillaryclinton white | 16 | 489_debates_hillaryclinton_hillaryclinton white_realdonaldtrump vs | | 490 | macmiller - song - song donald trump - song donald - macmillers | 16 | 490_macmiller_song_song donald trump_song donald | | 491 | melania send - melania - deepest condolences - deepest - condolences | 16 | 491_melania send_melania_deepest condolences_deepest | | 492 | polls - trump surging - ahead - trump leading - surging | 16 | 492_polls_trump surging_ahead_trump leading | | 493 | media bias - bias - measure - medias - heightened | 16 | 493_media bias_bias_measure_medias | | 494 | apprenticenbc - season premiere - premiere apprenticenbc - premiere - season premiere apprenticenbc | 16 | 494_apprenticenbc_season premiere_premiere apprenticenbc_premiere | | 495 | turnberry - trump turnberry - trumpturnberry - turnberrybuzz - resort world | 16 | 495_turnberry_trump turnberry_trumpturnberry_turnberrybuzz | | 496 | trump remember - donald trump httptinyurlcompqpfvm - trump httptinyurlcompqpfvm - httptinyurlcompqpfvm - game trump | 16 | 496_trump remember_donald trump httptinyurlcompqpfvm_trump httptinyurlcompqpfvm_httptinyurlcompqpfvm | | 497 | pinehurst - mattginellagc - 400 play - pay 400 play - pay 400 | 16 | 497_pinehurst_mattginellagc_400 play_pay 400 play | | 498 | hagel - chuck hagel - chuck - sod - sec defense | 16 | 498_hagel_chuck hagel_chuck_sod | | 499 | happy national - farmers - presidents day - happy - great day | 16 | 499_happy national_farmers_presidents day_happy | | 500 | ed henry - ed - henry - levin - mark levin | 16 | 500_ed henry_ed_henry_levin | | 501 | texas - texas love - men amp women working - amp women working - hard tomorrow | 16 | 501_texas_texas love_men amp women working_amp women working | | 502 | arguments - hours - hours rt - democrats spent - house democrats | 16 | 502_arguments_hours_hours rt_democrats spent | | 503 | answered - questions - video - todays video - facebook | 15 | 503_answered_questions_video_todays video | | 504 | thank west virginia - thank west - kansas - thank tennessee - virginia | 15 | 504_thank west virginia_thank west_kansas_thank tennessee | | 505 | american workers - workers - rt ivankatrump - pledge americas workers - pledge americas | 15 | 505_american workers_workers_rt ivankatrump_pledge americas workers | | 506 | read transcripts - transcripts - read transcript - transcript - read | 15 | 506_read transcripts_transcripts_read transcript_transcript | | 507 | rt realdonaldtrump fake news - rt realdonaldtrump fake - realdonaldtrump fake news - realdonaldtrump fake - fake news | 15 | 507_rt realdonaldtrump fake news_rt realdonaldtrump fake_realdonaldtrump fake news_realdonaldtrump fake | | 508 | radical left 
democrats - left democrats - radical left - radical - left | 15 | 508_radical left democrats_left democrats_radical left_radical | | 509 | rt jasoninthehouse - jasoninthehouse - jason - did country - gameexpress1 | 15 | 509_rt jasoninthehouse_jasoninthehouse_jason_did country | | 510 | potus realdonaldtrump - rt teamtrump - realdonaldtrump acting - team trumps - live president donald trump | 15 | 510_potus realdonaldtrump_rt teamtrump_realdonaldtrump acting_team trumps | | 511 | tom - tom tiffany - tiffany - people wisconsin - wisconsin | 15 | 511_tom_tom tiffany_tiffany_people wisconsin | | 512 | maxine waters - maxine - waters - michele - bachmann | 15 | 512_maxine waters_maxine_waters_michele | | 513 | eliot - spitzer - eliot spitzer - comptroller - cents | 15 | 513_eliot_spitzer_eliot spitzer_comptroller | | 514 | hispanic - hispanic americans - hispanic heritage - heritage - cincodemayo | 15 | 514_hispanic_hispanic americans_hispanic heritage_heritage | | 515 | negotiation - negotiation art - negotiation art treat - negotiation art treat like - art treat | 15 | 515_negotiation_negotiation art_negotiation art treat_negotiation art treat like | | 516 | prize - nobel - peace - trump nominated - nominated | 15 | 516_prize_nobel_peace_trump nominated | | 517 | video - isis - hillary - lied - hillary clinton lied | 15 | 517_video_isis_hillary_lied | | 518 | drudgereport - rt drudgereport - httptcofokcasbvun - wow great - drudges | 15 | 518_drudgereport_rt drudgereport_httptcofokcasbvun_wow great | | 519 | money big - playing game - excitement - motivation - score | 15 | 519_money big_playing game_excitement_motivation | | 520 | followers - twitter followers - twitter - just passed - passed | 15 | 520_followers_twitter followers_twitter_just passed | | 521 | kamala - kamala harris - harris - socialist - vision country | 15 | 521_kamala_kamala harris_harris_socialist | | 522 | ceiling - debt ceiling - debt - supercommittee - republicans | 15 | 522_ceiling_debt ceiling_debt_supercommittee | | 523 | oil - taking oil - china taking - iraq - china | 15 | 523_oil_taking oil_china taking_iraq | | 524 | south carolina - south carolina makeamericagreatagain trump2016 - carolina makeamericagreatagain trump2016 - south carolina makeamericagreatagain - carolina makeamericagreatagain | 15 | 524_south carolina_south carolina makeamericagreatagain trump2016_carolina makeamericagreatagain trump2016_south carolina makeamericagreatagain | | 525 | negotiating - knows think - want negotiating - make smart - dealmaker | 15 | 525_negotiating_knows think_want negotiating_make smart | | 526 | asktrump - twitternyc - just wrapped - usminority great - marieleff | 15 | 526_asktrump_twitternyc_just wrapped_usminority great | | 527 | worry wont - dont worry wont - httpstcovxrgfrfejm - httpstco5vlnzrg6gn - just didnt httpstco9t50nupkdy | 15 | 527_worry wont_dont worry wont_httpstcovxrgfrfejm_httpstco5vlnzrg6gn | | 528 | thank maga kag2020 - thank maga - maga kag2020 - kag - maga kag | 15 | 528_thank maga kag2020_thank maga_maga kag2020_kag | | 529 | 730 - squawkcnbc - trumptuesday - trumptuesday squawkcnbc - tune tomorrow | 15 | 529_730_squawkcnbc_trumptuesday_trumptuesday squawkcnbc | | 530 | president think - great thank - wish - realdonaldtrump id love - president turn | 15 | 530_president think_great thank_wish_realdonaldtrump id love | | 531 | thank mike - mike - great going - like michael - great spend time | 15 | 531_thank mike_mike_great going_like michael | | 532 | pack - pack court - court - court radical - 
court radical left | 15 | 532_pack_pack court_court_court radical | | 533 | solution problem - look solution problem - look solution - solution - victorious look solution | 15 | 533_solution problem_look solution problem_look solution_solution | | 534 | administration history - years existence - administration history country - accomplished - administration | 15 | 534_administration history_years existence_administration history country_accomplished | | 535 | easter - happy easter - great easter - happy - great day | 15 | 535_easter_happy easter_great easter_happy | | 536 | pledge - pledge allegiance - allegiance - god pledge allegiance - pledgetoamericasworkers | 14 | 536_pledge_pledge allegiance_allegiance_god pledge allegiance | | 537 | lakers - dwight - howard - houston - shaq | 14 | 537_lakers_dwight_howard_houston | | 538 | mexico - wall - pay wall - mexico pay - mexico pay wall | 14 | 538_mexico_wall_pay wall_mexico pay | | 539 | oseanessytweet - oseanessytweet greggutfeld - greggutfeld - newsmax - newsmax oann | 14 | 539_oseanessytweet_oseanessytweet greggutfeld_greggutfeld_newsmax | | 540 | jacknicklaus - nicklaus - jack nicklaus - grand opening - jack | 14 | 540_jacknicklaus_nicklaus_jack nicklaus_grand opening | | 541 | hardballchris - matthews - chris - chris matthews - completely lost | 14 | 541_hardballchris_matthews_chris_chris matthews | | 542 | wwehof - sammartinobruno - sold crowd - schwarzenegger - fellow | 14 | 542_wwehof_sammartinobruno_sold crowd_schwarzenegger | | 543 | jim - jim great - thank jim - pandering - incredibly stated jim | 14 | 543_jim_jim great_thank jim_pandering | | 544 | happy thanksgiving - thanksgiving - happy - make america great happy - america great happy | 14 | 544_happy thanksgiving_thanksgiving_happy_make america great happy | | 545 | achievers - achievement - beginning - times - forward | 14 | 545_achievers_achievement_beginning_times | | 546 | vietnam - hanoi - hanoi vietnam - philippines - great day meetings | 14 | 546_vietnam_hanoi_hanoi vietnam_philippines | | 547 | stjude - erictrumpfdn - research hospital - childrens research hospital - childrens research | 14 | 547_stjude_erictrumpfdn_research hospital_childrens research hospital | | 548 | neverforget - dday75thanniversary - dday75thanniversary dday75 - dday75 - salutetoamericajuly4th | 14 | 548_neverforget_dday75thanniversary_dday75thanniversary dday75_dday75 | | 549 | apologize - media apologize - collusion delusion - delusion - apologized | 14 | 549_apologize_media apologize_collusion delusion_delusion | | 550 | polls trump - peggynoonannyc - trump winning - polls - national poll | 14 | 550_polls trump_peggynoonannyc_trump winning_polls | | 551 | kemp - brian kemp - brian - georgia - governor | 14 | 551_kemp_brian kemp_brian_georgia | | 552 | vancouver - gonna awesome - trump tower vancouver - realdonaldtrump welcome - trumpvancouver | 14 | 552_vancouver_gonna awesome_trump tower vancouver_realdonaldtrump welcome | | 553 | nafta - canada - terminate - nafta worst trade - nafta worst | 14 | 553_nafta_canada_terminate_nafta worst trade | | 554 | south carolina - south - carolina - won south - votetrumpsc | 14 | 554_south carolina_south_carolina_won south | | 555 | rt pollwatch2020 - pollwatch2020 - democracy institute - institute - trump 47 | 14 | 555_rt pollwatch2020_pollwatch2020_democracy institute_institute | | 556 | trumpadvice - tbt trump - real fan - fan trump - tbt | 14 | 556_trumpadvice_tbt trump_real fan_fan trump | | 557 | great story - article - wonderful article - nice 
story - nice article | 14 | 557_great story_article_wonderful article_nice story | | 558 | burisma - biden campaign - heat - biden - investigation | 14 | 558_burisma_biden campaign_heat_biden | | 559 | celebapprentice - mvp - impact - brand - celebapprentice flashback | 14 | 559_celebapprentice_mvp_impact_brand | | 560 | radical islam - islam - refugees - syrian - radical | 13 | 560_radical islam_islam_refugees_syrian | | 561 | georgia - kloeffler - perduesenate - runoff - sendavidperdue | 13 | 561_georgia_kloeffler_perduesenate_runoff | | 562 | winners - person reacts new twist fate - new twist fate - winners losers person reacts - winners losers person reacts new | 13 | 562_winners_person reacts new twist fate_new twist fate_winners losers person reacts | | 563 | way soon - soon - way - way kag2020 - come immediately | 13 | 563_way soon_soon_way_way kag2020 | | 564 | arabia - saudi arabia - saudi - pay - paying | 13 | 564_arabia_saudi arabia_saudi_pay | | 565 | dan bishop - bishop - dan - north carolina - vote dan | 13 | 565_dan bishop_bishop_dan_north carolina | | 566 | rt judicialwatch - jw announced - jw - judicialwatch - rt judicialwatch jw | 13 | 566_rt judicialwatch_jw announced_jw_judicialwatch | | 567 | rt danscavino - danscavino - kag2020 - trumppence2020 - danscavino happening | 13 | 567_rt danscavino_danscavino_kag2020_trumppence2020 | | 568 | blumenthal - vietnam - war hero - connecticut - nang | 13 | 568_blumenthal_vietnam_war hero_connecticut | | 569 | soleimani - terrorist - media democrat - really matter - attack terrorist | 13 | 569_soleimani_terrorist_media democrat_really matter | | 570 | armywpfootball - commanderinchiefs - afacademy - commanderinchiefs trophy - armynavygame | 13 | 570_armywpfootball_commanderinchiefs_afacademy_commanderinchiefs trophy | | 571 | chucktodd - chuckwoolery - chucktodd meetthepress - rt chuckwoolery - meetthepress | 13 | 571_chucktodd_chuckwoolery_chucktodd meetthepress_rt chuckwoolery | | 572 | hostage - plane - hostages - great michael - held hostage | 13 | 572_hostage_plane_hostages_great michael | | 573 | corker - bob corker - senator bob - bob - tennessee | 13 | 573_corker_bob corker_senator bob_bob | | 574 | roberts - justice roberts - justice - bushes - obamacare | 13 | 574_roberts_justice roberts_justice_bushes | | 575 | interview yesterday - interview - interview night - discussing ows - foxandfriends interview yesterday | 13 | 575_interview yesterday_interview_interview night_discussing ows | | 576 | mueller - muellers - wrongdoing - rosenstein - mueller report | 13 | 576_mueller_muellers_wrongdoing_rosenstein | | 577 | abc2020 - 10pme - wife melaniatrump - tonight 10pme enjoy - great flotus melania | 13 | 577_abc2020_10pme_wife melaniatrump_tonight 10pme enjoy | | 578 | cities - cities run - run democrats - democrat run - rioters | 13 | 578_cities_cities run_run democrats_democrat run | | 579 | mcuban - dummy mcuban - tee - stupid people - dummy | 13 | 579_mcuban_dummy mcuban_tee_stupid people | | 580 | just returned - returned - whitehouse great - whitehouse great evening - beautiful whitehouse | 13 | 580_just returned_returned_whitehouse great_whitehouse great evening | | 581 | iowa - iowa poll - new cnn poll - new cnn - leading | 13 | 581_iowa_iowa poll_new cnn poll_new cnn | | 582 | penn state - penn - ncaa - hurt great - state university | 13 | 582_penn state_penn_ncaa_hurt great | | 583 | new year - happy new year - happy new - happy healthy - healthy | 13 | 583_new year_happy new year_happy new_happy healthy | | 584 | ad 
- tv ad - misleading - standards - ruled | 13 | 584_ad_tv ad_misleading_standards | | 585 | black - african - joe biden - joe - community | 13 | 585_black_african_joe biden_joe | | 586 | sean - sean spicer - spicer - dancing stars - dancing | 13 | 586_sean_sean spicer_spicer_dancing stars | | 587 | cher - ugliest - song - people dont care - things republican | 13 | 587_cher_ugliest_song_people dont care | | 588 | new trial - jackson - judge - jury - trial | 13 | 588_new trial_jackson_judge_jury | | 589 | oreillyfactor - oreillyfactor tonight - oreillyfactor tonight 8pm - 8pm - 800 pm enjoy | 13 | 589_oreillyfactor_oreillyfactor tonight_oreillyfactor tonight 8pm_8pm | | 590 | univision - fusion - trade deals - apologized - funny just | 12 | 590_univision_fusion_trade deals_apologized | | 591 | bryant - racist - hbo - dumb - dope | 12 | 591_bryant_racist_hbo_dumb | | 592 | usfda - stevefda - cure - worse problem - cure worse problem | 12 | 592_usfda_stevefda_cure_worse problem | | 593 | pope - pope francis - vatican - francis - catholic | 12 | 593_pope_pope francis_vatican_francis | | 594 | wisconsin - walker - scott walker - justice daniel kelly - justice daniel | 12 | 594_wisconsin_walker_scott walker_justice daniel kelly | | 595 | nh - nhpolitics - new hampshire - hampshire - people nh | 12 | 595_nh_nhpolitics_new hampshire_hampshire | | 596 | press conference today - thank having - having white house - having white - house press conference | 12 | 596_press conference today_thank having_having white house_having white | | 597 | great speech - speech - speech tonight - speech republican - awesome speech | 12 | 597_great speech_speech_speech tonight_speech republican | | 598 | reopen - reopening - mnuchin - safely - prevent | 12 | 598_reopen_reopening_mnuchin_safely | | 599 | declassified - russia collusion narrative - declassified documents - greggjarrett newly - rt greggjarrett newly | 12 | 599_declassified_russia collusion narrative_declassified documents_greggjarrett newly | | 600 | russia russia - journalists - prize - russia - got right | 12 | 600_russia russia_journalists_prize_russia | | 601 | bret - bret michaels - bretbaier - specialreport - michaels | 12 | 601_bret_bret michaels_bretbaier_specialreport | | 602 | sweepstweet - teresagiudice - chi - lisalampanelli - client | 12 | 602_sweepstweet_teresagiudice_chi_lisalampanelli | | 603 | destruction realdonaldtrump - usa going - weneedyou - way save - save | 12 | 603_destruction realdonaldtrump_usa going_weneedyou_way save | | 604 | wig - wear - know dont - know dont like - amp haters | 12 | 604_wig_wear_know dont_know dont like | | 605 | momentum - momentum momentum - momentum momentum lot great ideas - momentum momentum lot - momentum momentum lot great | 12 | 605_momentum_momentum momentum_momentum momentum lot great ideas_momentum momentum lot | | 606 | commercials - phony tv - commercial - tv ads - ads | 12 | 606_commercials_phony tv_commercial_tv ads | | 607 | phones - tapping - tapping phones - tapped - prior election | 12 | 607_phones_tapping_tapping phones_tapped | | 608 | ivankatrump - rt ivankatrump - doralresort - rt ivankatrump great - ivankatrump great | 12 | 608_ivankatrump_rt ivankatrump_doralresort_rt ivankatrump great | | 609 | liberty university - libertyu - jerry falwell - convocation - falwell | 12 | 609_liberty university_libertyu_jerry falwell_convocation | | 610 | voter id - id - identification - voter - card | 12 | 610_voter id_id_identification_voter | | 611 | realdonaldtrump opportunity - thank president 
- realdonaldtrump invite - force rt - export facility | 11 | 611_realdonaldtrump opportunity_thank president_realdonaldtrump invite_force rt | | 612 | breitbartnews - rt breitbartnews - ouch - ht rt breitbartnews - angel families | 11 | 612_breitbartnews_rt breitbartnews_ouch_ht rt breitbartnews | | 613 | chicago - killings - shootings - chicago police - federal help | 11 | 613_chicago_killings_shootings_chicago police | | 614 | california - vote trump - high crime high - hell vote - hell vote trump | 11 | 614_california_vote trump_high crime high_hell vote | | 615 | trumpscotland - justinrose99 - graemereid1984 trumpgolflinks - realdonaldtrump friend - trumpscotland thank | 11 | 615_trumpscotland_justinrose99_graemereid1984 trumpgolflinks_realdonaldtrump friend | | 616 | jackshallis - donald trump president realdonaldtrump - trump president realdonaldtrump - donald trump president - jackshallis realdonaldtrump | 11 | 616_jackshallis_donald trump president realdonaldtrump_trump president realdonaldtrump_donald trump president | | 617 | bruce - ohr - bruce ohr - nelly - ohrs | 11 | 617_bruce_ohr_bruce ohr_nelly | | 618 | median - median household income - median household - household - household income | 11 | 618_median_median household income_median household_household | | 619 | kentucky - mitch - mitch mcconnell - mcconnell - senate majority leader | 11 | 619_kentucky_mitch_mitch mcconnell_mcconnell | | 620 | deaths - cases - death rate - coronavirus - strongly trending | 11 | 620_deaths_cases_death rate_coronavirus | | 621 | zimmerman - trial - trayvon - george - angel | 11 | 621_zimmerman_trial_trayvon_george | | 622 | meddling - thought crooked hillary - thought crooked - russian meddling - crooked hillary going win | 11 | 622_meddling_thought crooked hillary_thought crooked_russian meddling | | 623 | eddie - navy - gallagher - seal - navy seal | 11 | 623_eddie_navy_gallagher_seal | | 624 | drug - drug costs - drug prices - prescription - prescription drug | 11 | 624_drug_drug costs_drug prices_prescription | | 625 | davos - davos switzerland - switzerland - economic forum - world economic forum | 11 | 625_davos_davos switzerland_switzerland_economic forum | | 626 | veto - national monuments - termination - section 230 - 230 | 11 | 626_veto_national monuments_termination_section 230 | | 627 | stimulus - fault - payments - people desperately - people isnt | 10 | 627_stimulus_fault_payments_people desperately | | 628 | congratulations new - secretary - milley - hr mcmaster - general hr | 10 | 628_congratulations new_secretary_milley_hr mcmaster | | 629 | great interview - interview - ledger - marthamaccallum - lesley stahl 60minutes | 10 | 629_great interview_interview_ledger_marthamaccallum | | 630 | praying - pray - win new york - america loves trump - york thank | 10 | 630_praying_pray_win new york_america loves trump | | 631 | yovanovitch - ambassador - ambassador sondland - sondland - did tell | 10 | 631_yovanovitch_ambassador_ambassador sondland_sondland | | 632 | cruz - bobvanderplaats - total phony - hotel rooms - ted cruz | 10 | 632_cruz_bobvanderplaats_total phony_hotel rooms | | 633 | good night - sleep - mind - today great day - win good | 10 | 633_good night_sleep_mind_today great day | | 634 | vaccine - coronavirus vaccine - nih - johnson - covid19 | 10 | 634_vaccine_coronavirus vaccine_nih_johnson | | 635 | think big - thinking - thinking big - life think - life think big | 10 | 635_think big_thinking_thinking big_life think | | 636 | jimmy fallon - fallon - jimmy - late 
night jimmy - night jimmy | 10 | 636_jimmy fallon_fallon_jimmy_late night jimmy |
| 637 | daytime - weekend daytime - foxnews daytime - weekend - amp newsmax | 10 | 637_daytime_weekend daytime_foxnews daytime_weekend |
| 638 | clewandowski - danscavino - rt danscavino - ya know - rt danscavino realdonaldtrump stops | 10 | 638_clewandowski_danscavino_rt danscavino_ya know |
| 639 | bought - realdonaldtrump bought - make rich - edincamera2 alphatreblesix - edincamera2 | 10 | 639_bought_realdonaldtrump bought_make rich_edincamera2 alphatreblesix |
| 640 | losing battle new way win - battle new - battle new way - battle new way win - battle new way win war | 10 | 640_losing battle new way win_battle new_battle new way_battle new way win |
| 641 | coronavirus - paid sick - paid sick leave - sick leave - families coronavirus response act | 10 | 641_coronavirus_paid sick_paid sick leave_sick leave |
| 642 | dubai - golf club dubai - club dubai - tigerwoods - course dubai | 10 | 642_dubai_golf club dubai_club dubai_tigerwoods |
| 643 | pastor - brunson - andrew brunson - pastor andrew - pastor andrew brunson | 10 | 643_pastor_brunson_andrew brunson_pastor andrew |
| 644 | obama better - stay offense - days election - offense - debate | 10 | 644_obama better_stay offense_days election_offense |
| 645 | dark knight - dark knight rises - knight rises - knight - dark | 10 | 645_dark knight_dark knight rises_knight rises_knight |

</details>

## Training hyperparameters

* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None

## Framework versions

* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.7.0
* Transformers: 4.41.0
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
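For readers who want to train a comparable model, the hyperparameters listed above correspond to BERTopic's standard constructor arguments. The sketch below is an illustration only: the card does not state which embedding model or UMAP/HDBSCAN configuration was used, so those are left at BERTopic's defaults, and the parameter names assume a BERTopic release that supports the zero-shot options shown above (0.16 or later).

```python
from bertopic import BERTopic

# Re-create a topic model with the training hyperparameters reported above.
# Embedding model, UMAP, and HDBSCAN settings are not documented in this card,
# so BERTopic's defaults are used here (an assumption, not the original setup).
topic_model = BERTopic(
    language="english",
    top_n_words=10,
    n_gram_range=(1, 1),
    min_topic_size=10,
    nr_topics=None,
    low_memory=False,
    calculate_probabilities=False,
    seed_topic_list=None,
    zeroshot_topic_list=None,
    zeroshot_min_similarity=0.7,
    verbose=True,
)

# `docs` would be the list of raw tweet texts (the original run used 56571 documents).
# topics, probs = topic_model.fit_transform(docs)
```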
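Because the topic table is long, it is often easier to query topics programmatically once the model is loaded. A minimal sketch, using the repository id from the card's usage section and a topic id taken from the table above; `get_topic` and `find_topics` are standard BERTopic methods, and `find_topics` additionally needs the underlying embedding model to be available locally:

```python
from bertopic import BERTopic

# Load the published model from the Hugging Face Hub.
topic_model = BERTopic.load("Thang203/general_trump_tweets")

# Inspect one topic from the table, e.g. topic 641
# ("coronavirus - paid sick leave ..."): returns (keyword, c-TF-IDF weight) pairs.
print(topic_model.get_topic(641))

# Keyword search over the topic representations, e.g. to locate related topics.
topic_ids, similarities = topic_model.find_topics("border wall", top_n=5)
print(topic_ids, similarities)
```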
# general_trump_tweets

This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.

## Usage

To use this model, please install BERTopic:

```
pip install -U bertopic
```

You can use the model as follows:

```python
from bertopic import BERTopic
topic_model = BERTopic.load("Thang203/general_trump_tweets")
topic_model.get_topic_info()
```

## Topic overview

* Number of topics: 647
* Number of training documents: 56571

<details>
  <summary>Click here for an overview of all topics.</summary>

| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | president - realdonaldtrump - rt - trump - vote | 10 | -1_president_realdonaldtrump_rt_trump |
| 0 | thanks - good luck - luck - true - billmaher | 20734 | 0_thanks_good luck_luck_true |
| 1 | httpstcoz0i7wbsgtp - httpstcoggwnkrgz9u - httpstcoknvqf6jdil - httpstcobc2h4ozhqp - httpstcofduvk8cm9s | 1814 | 1_httpstcoz0i7wbsgtp_httpstcoggwnkrgz9u_httpstcoknvqf6jdil_httpstcobc2h4ozhqp |
| 2 | inspiration - thanks - man - role - role model | 1296 | 2_inspiration_thanks_man_role |
| 3 | china - chinese - tariffs - chinas - currency | 755 | 3_china_chinese_tariffs_chinas |
| 4 | obamacare - healthcare - repeal - premiums - replace | 680 | 4_obamacare_healthcare_repeal_premiums |
| 5 | biden - joe biden - joe - bidens - sleepy joe biden | 376 | 5_biden_joe biden_joe_bidens |
| 6 | tax - cuts - tax cuts - taxes - tax cut | 345 | 6_tax_cuts_tax cuts_taxes |
| 7 | realdonaldtrump foxandfriends - foxandfriends - foxnews - realdonaldtrump foxnews - belllabooo13 | 316 | 7_realdonaldtrump foxandfriends_foxandfriends_foxnews_realdonaldtrump foxnews |
| 8 | hillary - crooked hillary - crooked - hillary clinton - clinton | 316 | 8_hillary_crooked hillary_crooked_hillary clinton |
| 9 | iran - nuclear - iranian - sanctions - irans | 314 | 9_iran_nuclear_iranian_sanctions |
| 10 | hotel - luxury - tower - restaurant - rooms | 298 | 10_hotel_luxury_tower_restaurant |
| 11 | golf - course - golf course - golf club - club | 289 | 11_golf_course_golf course_golf club |
| 12 | pelosi - nancy - nancy pelosi - crazy nancy - speaker | 277 | 12_pelosi_nancy_nancy pelosi_crazy nancy |
| 13 | veterans - heroes - honor - memorial day - memorial | 277 | 13_veterans_heroes_honor_memorial day |
| 14 | impeachment - impeach - articles - democrats - articles impeachment | 275 | 14_impeachment_impeach_articles_democrats |
| 15 | rt realdonaldtrump - rt - realdonaldtrump - rt realdonaldtrump thank - realdonaldtrump thank | 272 | 15_rt realdonaldtrump_rt_realdonaldtrump_rt realdonaldtrump thank |
| 16 | run president - run - realdonaldtrump run - realdonaldtrump run president - donald run | 269 | 16_run president_run_realdonaldtrump run_realdonaldtrump run president |
| 17 | hurricane - nhcatlantic - fema - storm - local | 268 | 17_hurricane_nhcatlantic_fema_storm |
| 18 | ballots - mailin - ballot - fraud - voting | 268 | 18_ballots_mailin_ballot_fraud |
| 19 | portland - anarchists - protesters - peaceful - mayor | 257 | 19_portland_anarchists_protesters_peaceful |
| 20 | wind - turbines - wind turbines - alexsalmond - ugly | 244 | 20_wind_turbines_wind turbines_alexsalmond |
| 21 | total endorsement - congressman - endorsement - complete total endorsement - complete total | 237 | 21_total endorsement_congressman_endorsement_complete total endorsement |
| 22 | korea - north korea -
north - kim - kim jong | 230 | 22_korea_north korea_north_kim | | 23 | barackobama - obama - president obama - barackobamas - jimmy carter | 228 | 23_barackobama_obama_president obama_barackobamas | | 24 | interviewed - enjoy interviewed - enjoy - interviewed foxandfriends - interviewed seanhannity | 226 | 24_interviewed_enjoy interviewed_enjoy_interviewed foxandfriends | | 25 | donald trump - donald - newsmaxmedia - cpac - trump speak | 221 | 25_donald trump_donald_newsmaxmedia_cpac | | 26 | realdonaldtrump president - trump president - realdonaldtrump trump - realdonaldtrump trump president - realdonaldtrump president 2016 | 217 | 26_realdonaldtrump president_trump president_realdonaldtrump trump_realdonaldtrump trump president | | 27 | apprenticenbc - apprenticenbc realdonaldtrump - realdonaldtrump apprenticenbc - brandiglanville - brandiglanville apprenticenbc | 209 | 27_apprenticenbc_apprenticenbc realdonaldtrump_realdonaldtrump apprenticenbc_brandiglanville | | 28 | ukraine - ukrainian - quid - pro quo - quid pro quo | 207 | 28_ukraine_ukrainian_quid_pro quo | | 29 | wall - border - built - build - southern border | 204 | 29_wall_border_built_build | | 30 | israel - netanyahu - jerusalem - omar - peace | 203 | 30_israel_netanyahu_jerusalem_omar | | 31 | poll - poll trump - carson - reuters - leads | 201 | 31_poll_poll trump_carson_reuters | | 32 | celebrityapprentice - celebapprentice - realdonaldtrump celebrityapprentice - season celebrityapprentice - season | 200 | 32_celebrityapprentice_celebapprentice_realdonaldtrump celebrityapprentice_season celebrityapprentice | | 33 | apprentice - celebrity apprentice - celebrity - season - realdonaldtrump celebrity | 198 | 33_apprentice_celebrity apprentice_celebrity_season | | 34 | trump2016 - realdonaldtrump trump2016 - trump2016 realdonaldtrump - trump16 - president trump2016 | 195 | 34_trump2016_realdonaldtrump trump2016_trump2016 realdonaldtrump_trump16 | | 35 | scotland - turnberry - golf - course - golf course | 193 | 35_scotland_turnberry_golf_course | | 36 | think like - think like champion - like champion - champion - think big | 187 | 36_think like_think like champion_like champion_champion | | 37 | miss - pageant - miss universe - universe - miss usa | 185 | 37_miss_pageant_miss universe_universe | | 38 | fake news - media - fake - news media - fake news media | 185 | 38_fake news_media_fake_news media | | 39 | chicago - building - sign - tower - tower chicago | 176 | 39_chicago_building_sign_tower | | 40 | economy - gdp - economic - growth - numbers | 173 | 40_economy_gdp_economic_growth | | 41 | nytimes - failing nytimes - failing - new york times - york times | 164 | 41_nytimes_failing nytimes_failing_new york times | | 42 | thank - thank matt - maria - eric - tammy | 159 | 42_thank_thank matt_maria_eric | | 43 | jeb - jeb bush - bush - jebbush - jebs | 158 | 43_jeb_jeb bush_bush_jebbush | | 44 | schiff - adam - adam schiff - shifty - schiffs | 155 | 44_schiff_adam_adam schiff_shifty | | 45 | entrepreneurs - momentum - young entrepreneurs - youre doing - entrepreneurs dont | 155 | 45_entrepreneurs_momentum_young entrepreneurs_youre doing | | 46 | supreme - supreme court - court - judges - justices | 153 | 46_supreme_supreme court_court_judges | | 47 | rt erictrump - erictrump - kimstrassel - rt marklevinshow - marklevinshow | 147 | 47_rt erictrump_erictrump_kimstrassel_rt marklevinshow | | 48 | fed - federal reserve - reserve - rates - inflation | 146 | 48_fed_federal reserve_reserve_rates | | 49 | discussing - interview 
discussing - interview - squawkcnbc - squawkcnbc interview | 144 | 49_discussing_interview discussing_interview_squawkcnbc | | 50 | cnn - ratings - msnbc - news - wow cnn | 143 | 50_cnn_ratings_msnbc_news | | 51 | hotel - vegas - stayed - las - las vegas | 142 | 51_hotel_vegas_stayed_las | | 52 | thank - thank working - thank working hard - working hard - working hard thank | 139 | 52_thank_thank working_thank working hard_working hard | | 53 | gas - opec - prices - oil - gas prices | 133 | 53_gas_opec_prices_oil | | 54 | focus - passion - goals - success - youre doing | 133 | 54_focus_passion_goals_success | | 55 | interview - great interview - realdonaldtrump great interview - interview realdonaldtrump - realdonaldtrump great | 129 | 55_interview_great interview_realdonaldtrump great interview_interview realdonaldtrump | | 56 | comey - james comey - james - mccabe - fbi | 125 | 56_comey_james comey_james_mccabe | | 57 | flynn - general flynn - michael flynn - general - general michael | 125 | 57_flynn_general flynn_michael flynn_general | | 58 | makeamericagreatagain - realdonaldtrump makeamericagreatagain - trump2016 makeamericagreatagain - keksecorg realdonaldtrump - keksecorg | 124 | 58_makeamericagreatagain_realdonaldtrump makeamericagreatagain_trump2016 makeamericagreatagain_keksecorg realdonaldtrump | | 59 | acting - secretary - pleased - pleased announce - director | 121 | 59_acting_secretary_pleased_pleased announce | | 60 | immigration - border - loopholes - open borders - laws | 119 | 60_immigration_border_loopholes_open borders | | 61 | mueller - mueller report - report - obstruction - collusion | 119 | 61_mueller_mueller report_report_obstruction | | 62 | pennsylvania - thank pennsylvania - thank arizona - arizona - thank | 118 | 62_pennsylvania_thank pennsylvania_thank arizona_arizona | | 63 | senategop - rt senategop - senate - senatemajldr - senjohnbarrasso | 118 | 63_senategop_rt senategop_senate_senatemajldr | | 64 | hispanics - latinos - immigration - illegals - illegal | 115 | 64_hispanics_latinos_immigration_illegals | | 65 | ties - tie - macys - shirts - trump tie | 115 | 65_ties_tie_macys_shirts | | 66 | realdonaldtrump make america great - realdonaldtrump make america - america great - america great rt - realdonaldtrump make | 111 | 66_realdonaldtrump make america great_realdonaldtrump make america_america great_america great rt | | 67 | syria - attack syria - rebels - syrian - assad | 110 | 67_syria_attack syria_rebels_syrian | | 68 | obama - need obama - obamas - realdonaldtrump obama - mess obama | 108 | 68_obama_need obama_obamas_realdonaldtrump obama | | 69 | romney - mitt romney - mitt - stevens - manchin | 106 | 69_romney_mitt romney_mitt_stevens | | 70 | democrats - gopchairwoman - democratic - rt gopchairwoman - democrat | 105 | 70_democrats_gopchairwoman_democratic_rt gopchairwoman | | 71 | debt - budget - barackobama - deficit - national debt | 105 | 71_debt_budget_barackobama_deficit | | 72 | tariffs - trade - barriers - farmers - products | 104 | 72_tariffs_trade_barriers_farmers | | 73 | coronavirus - coronavirus task force - coronavirus task - task force - press briefing | 101 | 73_coronavirus_coronavirus task force_coronavirus task_task force | | 74 | truth - speaks truth - handle truth - speaks - agree | 101 | 74_truth_speaks truth_handle truth_speaks | | 75 | rally - crowd - big rally - big crowd - carolina | 101 | 75_rally_crowd_big rally_big crowd | | 76 | iowa - caucus - des moines - moines - des | 97 | 76_iowa_caucus_des moines_moines | | 77 | 
schools - school - education - students - children | 96 | 77_schools_school_education_students | | 78 | great make america - great make - america great make - america great make america - make america great make | 96 | 78_great make america_great make_america great make_america great make america | | 79 | thank - thank marklevinshow - thank rep - senatordole - thank loudobbs | 95 | 79_thank_thank marklevinshow_thank rep_senatordole | | 80 | stock market - stock - market - high - alltime high | 95 | 80_stock market_stock_market_high | | 81 | lawsuit - sue - fees - legal fees - trump university | 95 | 81_lawsuit_sue_fees_legal fees | | 82 | rove - karl - karlrove - karl rove - ashley | 92 | 82_rove_karl_karlrove_karl rove | | 83 | polls - poll - suppression - fake - foxnews polls | 90 | 83_polls_poll_suppression_fake | | 84 | tweets - tweet - twitter - realdonaldtrump tweets - realdonaldtrump twitter | 90 | 84_tweets_tweet_twitter_realdonaldtrump tweets | | 85 | charity - fundanything - billmaher - million - away money | 90 | 85_charity_fundanything_billmaher_million | | 86 | usmca - trade deal - agreement - trade - manufacturers | 89 | 86_usmca_trade deal_agreement_trade | | 87 | mayor - nypd - nyc - yorks - new yorks | 89 | 87_mayor_nypd_nyc_yorks | | 88 | new hampshire - hampshire - thank new hampshire - thank new - fitn | 89 | 88_new hampshire_hampshire_thank new hampshire_thank new | | 89 | join - tickets - tomorrow - 3pm - rapids | 88 | 89_join_tickets_tomorrow_3pm | | 90 | joan - joanrivers - rivers - melrivers - apprenticenbc | 87 | 90_joan_joanrivers_rivers_melrivers | | 91 | ebola - africa - west africa - flights - infected | 87 | 91_ebola_africa_west africa_flights | | 92 | collusion - russian - russian collusion - intelligence committee - russia | 86 | 92_collusion_russian_russian collusion_intelligence committee | | 93 | debate - debates - won debate - won - drudge | 85 | 93_debate_debates_won debate_won | | 94 | celebrity apprentice - apprentice - celebrity - episode - pm nbc | 84 | 94_celebrity apprentice_apprentice_celebrity_episode | | 95 | emails - 33000 - deleted - 33000 emails - clinton | 83 | 95_emails_33000_deleted_33000 emails | | 96 | paycheckprotectionprogram - paycheck - paycheck protection - program - protection | 82 | 96_paycheckprotectionprogram_paycheck_paycheck protection_program | | 97 | rubio - marco - marco rubio - lightweight senator - senator marco | 78 | 97_rubio_marco_marco rubio_lightweight senator | | 98 | dannyzuker - danny - zuker - dannyzuker realdonaldtrump - realdonaldtrump dannyzuker | 78 | 98_dannyzuker_danny_zuker_dannyzuker realdonaldtrump | | 99 | nflcommish - buffalobills - bills - buffalo - owner | 77 | 99_nflcommish_buffalobills_bills_buffalo | | 100 | melaniatrump - lady - ivanka - ivankatrump - ivanka trump | 76 | 100_melaniatrump_lady_ivanka_ivankatrump | | 101 | oscars - academy - awards - ellen - actor | 76 | 101_oscars_academy_awards_ellen | | 102 | fisa - fbi - fisa court - horowitz - carter page | 75 | 102_fisa_fbi_fisa court_horowitz | | 103 | mini - mini mike - bloomberg - mike - mini mike bloomberg | 75 | 103_mini_mini mike_bloomberg_mike | | 104 | twitter - tech - facebook - conservatives - big tech | 75 | 104_twitter_tech_facebook_conservatives | | 105 | unemployment - unemployment rate - rate - real unemployment - labor | 75 | 105_unemployment_unemployment rate_rate_real unemployment | | 106 | rt whitehouse live president realdonaldtrump - whitehouse live president realdonaldtrump - live president realdonaldtrump - whitehouse 
Topic overview (topics 106–637): a table with one row per topic giving the topic ID, its representative keywords, the number of documents assigned to it (ranging from 75 down to 10 across these topics), and its `<topic id>_<top keywords>` label.
daytime - weekend - amp newsmax | 10 | 637_daytime_weekend daytime_foxnews daytime_weekend | | 638 | clewandowski - danscavino - rt danscavino - ya know - rt danscavino realdonaldtrump stops | 10 | 638_clewandowski_danscavino_rt danscavino_ya know | | 639 | bought - realdonaldtrump bought - make rich - edincamera2 alphatreblesix - edincamera2 | 10 | 639_bought_realdonaldtrump bought_make rich_edincamera2 alphatreblesix | | 640 | losing battle new way win - battle new - battle new way - battle new way win - battle new way win war | 10 | 640_losing battle new way win_battle new_battle new way_battle new way win | | 641 | coronavirus - paid sick - paid sick leave - sick leave - families coronavirus response act | 10 | 641_coronavirus_paid sick_paid sick leave_sick leave | | 642 | dubai - golf club dubai - club dubai - tigerwoods - course dubai | 10 | 642_dubai_golf club dubai_club dubai_tigerwoods | | 643 | pastor - brunson - andrew brunson - pastor andrew - pastor andrew brunson | 10 | 643_pastor_brunson_andrew brunson_pastor andrew | | 644 | obama better - stay offense - days election - offense - debate | 10 | 644_obama better_stay offense_days election_offense | | 645 | dark knight - dark knight rises - knight rises - knight - dark | 10 | 645_dark knight_dark knight rises_knight rises_knight | </details> ## Training hyperparameters * calculate_probabilities: False * language: english * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: None * seed_topic_list: None * top_n_words: 10 * verbose: True * zeroshot_min_similarity: 0.7 * zeroshot_topic_list: None ## Framework versions * Numpy: 1.25.2 * HDBSCAN: 0.8.33 * UMAP: 0.5.6 * Pandas: 2.0.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.7.0 * Transformers: 4.41.0 * Numba: 0.58.1 * Plotly: 5.15.0 * Python: 3.10.12
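The hyperparameters above map onto standard `BERTopic` constructor arguments. As a rough sketch (not the exact training script for this card; the Hub repository id below is a placeholder), a comparable model can be configured, or the trained model reloaded, like this:

```python
from bertopic import BERTopic

# Re-create a topic model with the hyperparameters listed above
# (a sketch only, not the exact script used to train this model).
topic_model = BERTopic(
    language="english",
    top_n_words=10,
    n_gram_range=(1, 1),
    min_topic_size=10,
    nr_topics=None,
    calculate_probabilities=False,
    low_memory=False,
    verbose=True,
)

# Or load the trained model directly from the Hugging Face Hub;
# "<this-repo-id>" is a placeholder for this model's repository name.
# topic_model = BERTopic.load("<this-repo-id>")
```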
{"library_name": "bertopic", "pipeline_tag": "text-classification", "tags": ["bertopic"]}
dataset
null
554
unsloth/Phi-4-mini-instruct
unsloth
text-generation
[ "transformers", "safetensors", "phi3", "text-generation", "phi", "phi4", "unsloth", "nlp", "code", "microsoft", "math", "chat", "conversational", "custom_code", "multilingual", "base_model:microsoft/Phi-4-mini-instruct", "base_model:finetune:microsoft/Phi-4-mini-instruct", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2025-02-27T01:16:07Z
2025-03-03T00:36:25+00:00
4,618
11
---
base_model: microsoft/Phi-4-mini-instruct
language:
- multilingual
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- phi
- phi4
- unsloth
- nlp
- code
- microsoft
- math
- chat
- conversational
---

<div>
<p style="margin-bottom: 0; margin-top: 0;">
    <strong>This is Phi-4-mini-instruct with our BUG FIXES. <br> See <a href="https://huggingface.co/collections/unsloth/phi-4-all-versions-677eecf93784e61afe762afa">our collection</a> for versions of Phi-4 with our bug fixes including GGUF & 4-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
    <em>Unsloth's Phi-4 <a href="https://unsloth.ai/blog/dynamic-4bit">Dynamic Quants</a> is selectively quantized, greatly improving accuracy over standard 4-bit.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
    <a href="https://github.com/unslothai/unsloth/">
        <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
    </a>
    <a href="https://discord.gg/unsloth">
        <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
    </a>
    <a href="https://docs.unsloth.ai/">
        <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
    </a>
</div>
<h1 style="margin-top: 0rem;">Finetune your own Reasoning model like R1 with Unsloth!</h1>
</div>

We have a free Google Colab notebook for turning Phi-4 into a reasoning model: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb

### Unsloth bug fixes:

1. Padding and EOS tokens were the same - fixed this.
2. Chat template had an extra EOS token - removed this. Otherwise you will see <|end|> during inference.
3. EOS token should be <|end|>, not <|endoftext|>. Otherwise generation will terminate at <|endoftext|>.
4. Changed unk_token to � from EOS.

## ✨ Finetune for Free

All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
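As a complement to the notebooks in the table below, a minimal local Unsloth setup might look like the following sketch (the 4-bit loading, sequence length, and LoRA settings are illustrative assumptions, not the exact notebook configuration):

```python
from unsloth import FastLanguageModel

# Load this model through Unsloth (illustrative settings, not the notebook defaults).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-4-mini-instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters before fine-tuning on your own dataset.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```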
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **GRPO with Phi-4** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb) | 2x faster | 80% less |
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)

- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.

## Model Summary

Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-4 model family and supports 128K token context length. The model underwent an enhancement process, incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.
📰 [Phi-4-mini Microsoft Blog](https://aka.ms/phi4-feb2025) <br>
📖 [Phi-4-mini Technical Report](https://aka.ms/phi-4-multimodal/techreport) <br>
👩‍🍳 [Phi Cookbook](https://github.com/microsoft/PhiCookBook) <br>
🏡 [Phi Portal](https://azure.microsoft.com/en-us/products/phi) <br>
🖥️ Try It: [Azure](https://aka.ms/phi-4-mini/azure), [Huggingface](https://huggingface.co/spaces/microsoft/phi-4-mini) <br>

**Phi-4**: [[mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx)]; [multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct)

## Intended Uses

### Primary Use Cases

The model is intended for broad multilingual commercial and research use. It provides uses for general purpose AI systems and applications which require:

1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially math and logic)

The model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.

### Use Case Considerations

The model is not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models, as well as performance differences across languages, as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using the model within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including but not limited to privacy and trade compliance laws) that are relevant to their use case.

***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***

## Release Notes

This release of Phi-4-mini-instruct is based on valuable user feedback from the Phi-3 series. The Phi-4-mini model employs a new architecture for efficiency and a larger vocabulary for multilingual support, and better post-training techniques were used for instruction following and function calling, along with additional data, leading to substantial gains on key capabilities. It is anticipated that most use cases will benefit from this release, but users are encouraged to test the model in their particular AI applications. The enthusiastic support for the Phi-4 series is greatly appreciated. Feedback on Phi-4-mini-instruct is welcomed and crucial to the model's evolution and improvement.

### Model Quality

To understand the capabilities, the 3.8B-parameter Phi-4-mini-instruct model was compared with a set of models over a variety of benchmarks using an internal benchmark platform (see Appendix A for the benchmark methodology).
A high-level overview of the model quality is as follows:

| Benchmark | Similar size | | | | |2x size | | | | | |
|----------------------------------|-------------|-------------------|-------------------|-------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| | Phi-4 mini-Ins | Phi-3.5-mini-Ins | Llama-3.2-3B-Ins | Mistral-3B | Qwen2.5-3B-Ins | Qwen2.5-7B-Ins | Mistral-8B-2410 | Llama-3.1-8B-Ins | Llama-3.1-Tulu-3-8B | Gemma2-9B-Ins | GPT-4o-mini-2024-07-18 |
| **Popular aggregated benchmark** | | | | | | | | | | | |
| Arena Hard | 32.8 | 34.4 | 17.0 | 26.9 | 32.0 | 55.5 | 37.3 | 25.7 | 42.7 | 43.7 | 53.7 |
| BigBench Hard (0-shot, CoT) | 70.4 | 63.1 | 55.4 | 51.2 | 56.2 | 72.4 | 53.3 | 63.4 | 55.5 | 65.7 | 80.4 |
| MMLU (5-shot) | 67.3 | 65.5 | 61.8 | 60.8 | 65.0 | 72.6 | 63.0 | 68.1 | 65.0 | 71.3 | 77.2 |
| MMLU-Pro (0-shot, CoT) | 52.8 | 47.4 | 39.2 | 35.3 | 44.7 | 56.2 | 36.6 | 44.0 | 40.9 | 50.1 | 62.8 |
| **Reasoning** | | | | | | | | | | | |
| ARC Challenge (10-shot) | 83.7 | 84.6 | 76.1 | 80.3 | 82.6 | 90.1 | 82.7 | 83.1 | 79.4 | 89.8 | 93.5 |
| BoolQ (2-shot) | 81.2 | 77.7 | 71.4 | 79.4 | 65.4 | 80.0 | 80.5 | 82.8 | 79.3 | 85.7 | 88.7 |
| GPQA (0-shot, CoT) | 25.2 | 26.6 | 24.3 | 24.4 | 23.4 | 30.6 | 26.3 | 26.3 | 29.9 | 39.1 | 41.1 |
| HellaSwag (5-shot) | 69.1 | 72.2 | 77.2 | 74.6 | 74.6 | 80.0 | 73.5 | 72.8 | 80.9 | 87.1 | 88.7 |
| OpenBookQA (10-shot) | 79.2 | 81.2 | 72.6 | 79.8 | 79.3 | 82.6 | 80.2 | 84.8 | 79.8 | 90.0 | 90.0 |
| PIQA (5-shot) | 77.6 | 78.2 | 68.2 | 73.2 | 72.6 | 76.2 | 81.2 | 83.2 | 78.3 | 83.7 | 88.7 |
| Social IQA (5-shot) | 72.5 | 75.1 | 68.3 | 73.9 | 75.3 | 75.3 | 77.6 | 71.8 | 73.4 | 74.7 | 82.9 |
| TruthfulQA (MC2) (10-shot) | 66.4 | 65.2 | 59.2 | 62.9 | 64.3 | 69.4 | 63.0 | 69.2 | 64.1 | 76.6 | 78.2 |
| Winogrande (5-shot) | 67.0 | 72.2 | 53.2 | 59.8 | 63.3 | 71.1 | 63.1 | 64.7 | 65.4 | 74.0 | 76.9 |
| **Multilingual** | | | | | | | | | | | |
| Multilingual MMLU (5-shot) | 49.3 | 51.8 | 48.1 | 46.4 | 55.9 | 64.4 | 53.7 | 56.2 | 54.5 | 63.8 | 72.9 |
| MGSM (0-shot, CoT) | 63.9 | 49.6 | 44.6 | 44.6 | 53.5 | 64.5 | 56.7 | 56.7 | 58.6 | 75.1 | 81.7 |
| **Math** | | | | | | | | | | | |
| GSM8K (8-shot, CoT) | 88.6 | 76.9 | 75.6 | 80.1 | 80.6 | 88.7 | 81.9 | 82.4 | 84.3 | 84.9 | 91.3 |
| MATH (0-shot, CoT) | 64.0 | 49.8 | 46.7 | 41.8 | 61.7 | 60.4 | 41.6 | 47.6 | 46.1 | 51.3 | 70.2 |
| **Overall** | **63.5** | **60.5** | **56.2** | **56.9** | **60.1** | **67.9** | **60.2** | **62.3** | **60.9** | **65.0** | **75.5** |

Overall, the model, with only 3.8B parameters, achieves a similar level of multilingual language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store large amounts of factual knowledge, so users may experience factual inaccuracies. However, it may be possible to resolve this weakness by augmenting Phi-4 with a search engine, particularly when using the model under RAG settings.

## Usage

### Tokenizer

Phi-4-mini-instruct supports a vocabulary size of up to `200064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-4-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
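For example, a minimal sketch of inspecting the tokenizer and registering extra placeholder tokens might look like this (the token strings below are hypothetical, and the model's embedding matrix must be resized if tokens are added):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-mini-instruct")
print(len(tokenizer))  # effective vocabulary, including added placeholder tokens

# Hypothetical extra tokens for a downstream fine-tune.
num_added = tokenizer.add_tokens(["<|my_domain_tag|>", "<|my_tool_tag|>"], special_tokens=True)

# If tokens were added, resize the embeddings of the loaded model accordingly:
# model.resize_token_embeddings(len(tokenizer))
```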
### Input Formats

Given the nature of the training data, the Phi-4-mini-instruct model is best suited for prompts using specific formats. Below are the two primary formats:

#### Chat format

This format is used for general conversation and instructions:

```yaml
<|system|>Insert System Message<|end|><|user|>Insert User Message<|end|><|assistant|>
```

#### Tool-enabled function-calling format

This format is used when the user wants the model to provide function calls based on the given tools. The user should provide the available tools in the system prompt, wrapped by <|tool|> and <|/tool|> tokens. The tools should be specified in JSON format, using a JSON dump structure. Example:

`
<|system|>You are a helpful assistant with some tools.<|tool|>[{"name": "get_weather_updates", "description": "Fetches weather updates for a given city using the RapidAPI Weather API.", "parameters": {"city": {"description": "The name of the city for which to retrieve weather information.", "type": "str", "default": "London"}}}]<|/tool|><|end|><|user|>What is the weather like in Paris today?<|end|><|assistant|>
`

### Inference with vLLM

#### Requirements

List of required packages:

```
flash_attn==2.7.4.post1
torch==2.6.0
vllm>=0.7.2
```

#### Example

To perform inference using vLLM, you can use the following code snippet:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="microsoft/Phi-4-mini-instruct", trust_remote_code=True)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

sampling_params = SamplingParams(
    max_tokens=500,
    temperature=0.0,
)

output = llm.chat(messages=messages, sampling_params=sampling_params)
print(output[0].outputs[0].text)
```

### Inference with Transformers

#### Requirements

The Phi-4 family has been integrated into the `4.49.0` version of `transformers`. The current `transformers` version can be verified with `pip list | grep transformers`.

List of required packages:

```
flash_attn==2.7.4.post1
torch==2.6.0
transformers==4.49.0
accelerate==1.3.0
```

Phi-4-mini-instruct is also available in [Azure AI Studio]().

#### Example

After obtaining the Phi-4-mini-instruct model checkpoints, users can use this sample code for inference.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model_path = "microsoft/Phi-4-mini-instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
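Relating this back to the Input Formats section above, the tool-enabled prompt can also be assembled by hand when the bundled chat template is not used. The snippet below is a sketch that reuses the example tool definition from that section and relies only on plain Python string assembly:

```python
import json

tools = [{
    "name": "get_weather_updates",
    "description": "Fetches weather updates for a given city using the RapidAPI Weather API.",
    "parameters": {"city": {"description": "The name of the city for which to retrieve weather information.",
                            "type": "str",
                            "default": "London"}},
}]

# Assemble the tool-enabled prompt exactly as described under "Input Formats".
prompt = (
    "<|system|>You are a helpful assistant with some tools."
    f"<|tool|>{json.dumps(tools)}<|/tool|><|end|>"
    "<|user|>What is the weather like in Paris today?<|end|>"
    "<|assistant|>"
)
print(prompt)
```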
## Responsible AI Considerations

Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance, as well as performance disparities across non-English languages. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 4 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make them inappropriate to deploy in sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi 4 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, it is strongly recommended that users manually verify all API uses.
+ Long Conversation: Phi 4 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for possible conversational drift.

Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. The Phi 4 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place.
Important areas for consideration include:

+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.

## Training

### Model

+ **Architecture:** Phi-4-mini-instruct has 3.8B parameters and is a dense decoder-only Transformer model. When compared with Phi-3.5-mini, the major changes with Phi-4-mini-instruct are a 200K vocabulary, grouped-query attention, and shared input and output embedding.<br>
+ **Inputs:** Text. It is best suited for prompts using the chat format.<br>
+ **Context length:** 128K tokens<br>
+ **GPUs:** 512 A100-80G<br>
+ **Training time:** 21 days<br>
+ **Training data:** 5T tokens<br>
+ **Outputs:** Generated text in response to the input<br>
+ **Dates:** Trained between November and December 2024<br>
+ **Status:** This is a static model trained on offline datasets with the cutoff date of June 2024 for publicly available data.<br>
+ **Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
+ **Release date:** February 2025<br>
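The headline architecture details above can be double-checked programmatically. The sketch below assumes the standard Phi-3-style configuration attribute names exposed by `transformers`, which may differ slightly across versions:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("microsoft/Phi-4-mini-instruct", trust_remote_code=True)

print(config.vocab_size)               # ~200K vocabulary
print(config.max_position_embeddings)  # 128K context length
print(config.num_attention_heads, config.num_key_value_heads)  # grouped-query attention
print(config.tie_word_embeddings)      # shared input and output embedding
```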
### Training Datasets

Phi-4-mini’s training data includes a wide variety of sources, totaling 5 trillion tokens, and is a combination of 1) publicly available documents filtered for quality, selected high-quality educational data, and code; 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, and general knowledge of the world (e.g., science, daily activities, theory of mind, etc.); and 3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness. Focus was placed on the quality of data that could potentially improve the reasoning ability of the model, and the publicly available documents were filtered to contain a preferred level of knowledge. As an example, the result of a game in the Premier League on a particular day might be good training data for frontier models, but such information was removed to leave more model capacity for reasoning, given the model’s small size. More details about the data can be found in the Phi-4-mini-instruct technical report.

The decontamination process involved normalizing and tokenizing the dataset, then generating and comparing n-grams between the target dataset and benchmark datasets. Samples with matching n-grams above a threshold were flagged as contaminated and removed from the dataset. A detailed contamination report was generated, summarizing the matched text, matching ratio, and filtered results for further analysis.

### Fine-tuning

A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/sample_finetune.py).

## Safety Evaluation and Red-Teaming

Various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets were leveraged to evaluate Phi-4 models’ propensity to produce undesirable outputs across multiple languages and risk categories. Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety post-training that was done as detailed in the Phi 3 Safety Post-Training paper had a positive impact across multiple languages and risk categories as observed by refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Details on prior red team evaluations across Phi models can be found in the Phi 3 Safety Post-Training paper. For this release, the red team tested the model in English, Chinese, Japanese, Spanish, Portuguese, Arabic, Thai, and Russian for the following potential harms: Hate Speech and Bias, Violent Crimes, Specialized Advice, and Election Information. Their findings indicate that the model is resistant to jailbreak techniques across languages, but that language-specific attack prompts leveraging cultural context can cause the model to output harmful content. Another insight was that, in function calling scenarios, the model could sometimes hallucinate function names or URLs. The model may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages, and risk areas that account for cultural nuances where those languages are spoken.

## Software

* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)

## Hardware

Note that by default, the Phi-4-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:

* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100

If you want to run the model on:

* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"`
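For instance, a minimal sketch of the fallback described above for V100-class GPUs, using standard `transformers` arguments, is:

```python
from transformers import AutoModelForCausalLM

# On GPUs without flash-attention support (e.g. V100 or older),
# fall back to the eager attention implementation.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-4-mini-instruct",
    torch_dtype="auto",
    device_map="auto",
    attn_implementation="eager",
    trust_remote_code=True,
)
```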
## License

The model is licensed under the [MIT license](./LICENSE).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.

## Appendix A: Benchmark Methodology

We include a brief word on methodology here - and in particular, how we think about optimizing prompts. In an ideal world, we would never change any prompts in our benchmarks to ensure it is always an apples-to-apples comparison when comparing different models. Indeed, this is our default approach, and is the case in the vast majority of models we have run to date. There are, however, some exceptions to this. In some cases, we see a model that performs worse than expected on a given eval due to a failure to respect the output format. For example:

+ A model may refuse to answer questions (for no apparent reason), or in coding tasks models may prefix their response with “Sure, I can help with that. …” which may break the parser. In such cases, we have opted to try different system messages (e.g. “You must always respond to a question” or “Get to the point!”).
+ With some models, we observed that few shots actually hurt model performance. In this case we did allow running the benchmarks with 0-shots for all cases.
+ We have tools to convert between chat and completions APIs. When converting a chat prompt to a completion prompt, some models have different keywords, e.g. Human vs User. In these cases, we do allow for model-specific mappings for chat to completion prompts.

However, we do not:

+ Pick different few-shot examples. Few shots will always be the same when comparing different models.
+ Change prompt format: e.g. if it is an A/B/C/D multiple choice, we do not tweak this to 1/2/3/4 multiple choice.

### Benchmark datasets

The model was evaluated across a breadth of public and internal benchmarks to understand the model’s capabilities under multiple tasks and conditions. While most evaluations use English, a leading multilingual benchmark was incorporated that covers performance in select languages. More specifically,

+ Reasoning:
  + Winogrande: commonsense reasoning around pronoun resolution
  + PIQA: physical commonsense reasoning around everyday situations
  + ARC-challenge: grade-school multiple choice science questions
  + GPQA: very hard questions written and validated by experts in biology, physics, and chemistry
  + MedQA: medical question answering
  + Social IQA: social commonsense intelligence
  + BoolQ: natural questions from context
  + TruthfulQA: grounded reasoning
+ Language understanding:
  + HellaSwag: commonsense natural language inference around everyday events
  + ANLI: adversarial natural language inference
+ Function calling:
  + Berkeley function calling function and tool call
  + Internal function calling benchmarks
+ World knowledge:
  + TriviaQA: trivia questions on general topics
+ Math:
  + GSM8K: grade-school math word problems
  + GSM8K Hard: grade-school math word problems with large values and some absurdity
  + MATH: challenging competition math problems
+ Code:
  + HumanEval, HumanEval+, MBPP, MBPP+: Python coding tasks
  + LiveCodeBench, LiveBench: contamination-free code tasks
  + BigCode Bench: challenging programming tasks
  + Spider: SQL query tasks
  + Internal coding benchmarks
+ Instruction following:
  + IFEval: verifiable instructions
  + Internal instruction following benchmarks
+ Multilingual:
  + MGSM: multilingual grade-school math
  + Multilingual MMLU and MMLU-pro
  + MEGA: multilingual NLP tasks
+ Popular aggregated datasets: MMLU, MMLU-pro, BigBench-Hard, AGI Eval
+ Multi-turn conversations:
  + Data generated by an in-house adversarial conversation simulation tool
+ Single-turn trustworthiness evaluation:
  + DecodingTrust: a collection of trustworthiness benchmarks in eight different perspectives
  + XSTest: exaggerated safety evaluation
  + Toxigen: adversarial and hate speech detection
+ Red Team:
  + Responses to prompts provided by the AI Red Team at Microsoft
[ "MEDQA" ]
Non_BioNLP
<div> <p style="margin-bottom: 0; margin-top: 0;"> <strong>This is Phi-4-mini-instruct with our BUG FIXES. <br> See <a href="https://huggingface.co/collections/unsloth/phi-4-all-versions-677eecf93784e61afe762afa">our collection</a> for versions of Phi-4 with our bug fixes including GGUF & 4-bit formats.</strong> </p> <p style="margin-bottom: 0;"> <em>Unsloth's Phi-4 <a href="https://unsloth.ai/blog/dynamic-4bit">Dynamic Quants</a> is selectively quantized, greatly improving accuracy over standard 4-bit.</em> </p> <div style="display: flex; gap: 5px; align-items: center; "> <a href="https://github.com/unslothai/unsloth/"> <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133"> </a> <a href="https://discord.gg/unsloth"> <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173"> </a> <a href="https://docs.unsloth.ai/"> <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143"> </a> </div> <h1 style="margin-top: 0rem;">Finetune your own Reasoning model like R1 with Unsloth!</h2> </div> We have a free Google Colab notebook for turning Phi-4 into a reasoning model: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb ### Unsloth bug fixes: 1. Padding and EOS tokens are the same - fixed this. 2. Chat template had extra EOS token - removed this. Otherwise you will be <|end|> during inference. 3. EOS token should be <|end|> not <|endoftext|>. Otherwise it'll terminate at <|endoftext|> 4. Changed unk_token to � from EOS. ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. 
| Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **GRPO with Phi-4** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb) | 2x faster | 80% less | | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less | | **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less | | **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less | | **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less | | **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less | [<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai) - This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Model Summary Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-4 model family and supports 128K token context length. The model underwent an enhancement process, incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures. 
📰 [Phi-4-mini Microsoft Blog](https://aka.ms/phi4-feb2025) <br> 📖 [Phi-4-mini Technical Report](https://aka.ms/phi-4-multimodal/techreport) <br> 👩‍🍳 [Phi Cookbook](https://github.com/microsoft/PhiCookBook) <br> 🏡 [Phi Portal](https://azure.microsoft.com/en-us/products/phi) <br> 🖥️ Try It [Azure](https://aka.ms/phi-4-mini/azure), [Huggingface](https://huggingface.co/spaces/microsoft/phi-4-mini) <br> **Phi-4**: [[mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx)]; [multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct); ## Intended Uses ### Primary Use Cases The model is intended for broad multilingual commercial and research use. The model provides uses for general purpose AI systems and applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially math and logic). The model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. ### Use Case Considerations The model is not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models, as well as performance difference across languages, as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including but not limited to privacy, trade compliance laws, etc.) that are relevant to their use case. ***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.*** ## Release Notes This release of Phi-4-mini-instruct is based on valuable user feedback from the Phi-3 series. The Phi-4-mini model employed new architecture for efficiency, larger vocabulary for multilingual support, and better post-training techniques were used for instruction following, function calling, as well as additional data leading to substantial gains on key capabilities. It is anticipated that most use cases will benefit from this release, but users are encouraged to test in their particular AI applications. The enthusiastic support for the Phi-4 series is greatly appreciated. Feedback on Phi-4-mini-instruct is welcomed and crucial to the model’s evolution and improvement. ### Model Quality To understand the capabilities, the 3.8B parameters Phi-4-mini-instruct model was compared with a set of models over a variety of benchmarks using an internal benchmark platform (See Appendix A for benchmark methodology). 
A high-level overview of the model quality is as follows: | Benchmark | Similar size | | | | |2x size | | | | | | |----------------------------------|-------------|-------------------|-------------------|-------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------| | | Phi-4 mini-Ins | Phi-3.5-mini-Ins | Llama-3.2-3B-Ins | Mistral-3B | Qwen2.5-3B-Ins | Qwen2.5-7B-Ins | Mistral-8B-2410 | Llama-3.1-8B-Ins | Llama-3.1-Tulu-3-8B | Gemma2-9B-Ins | GPT-4o-mini-2024-07-18 | | **Popular aggregated benchmark** | | | | | | | | | | | | | Arena Hard | 32.8 | 34.4 | 17.0 | 26.9 | 32.0 | 55.5 | 37.3 | 25.7 | 42.7 | 43.7 | 53.7 | | BigBench Hard (0-shot, CoT) | 70.4 | 63.1 | 55.4 | 51.2 | 56.2 | 72.4 | 53.3 | 63.4 | 55.5 | 65.7 | 80.4 | | MMLU (5-shot) | 67.3 | 65.5 | 61.8 | 60.8 | 65.0 | 72.6 | 63.0 | 68.1 | 65.0 | 71.3 | 77.2 | | MMLU-Pro (0-shot, CoT) | 52.8 | 47.4 | 39.2 | 35.3 | 44.7 | 56.2 | 36.6 | 44.0 | 40.9 | 50.1 | 62.8 | | **Reasoning** | | | | | | | | | | | | | ARC Challenge (10-shot) | 83.7 | 84.6 | 76.1 | 80.3 | 82.6 | 90.1 | 82.7 | 83.1 | 79.4 | 89.8 | 93.5 | | BoolQ (2-shot) | 81.2 | 77.7 | 71.4 | 79.4 | 65.4 | 80.0 | 80.5 | 82.8 | 79.3 | 85.7 | 88.7 | | GPQA (0-shot, CoT) | 25.2 | 26.6 | 24.3 | 24.4 | 23.4 | 30.6 | 26.3 | 26.3 | 29.9 | 39.1 | 41.1 | | HellaSwag (5-shot) | 69.1 | 72.2 | 77.2 | 74.6 | 74.6 | 80.0 | 73.5 | 72.8 | 80.9 | 87.1 | 88.7 | | OpenBookQA (10-shot) | 79.2 | 81.2 | 72.6 | 79.8 | 79.3 | 82.6 | 80.2 | 84.8 | 79.8 | 90.0 | 90.0 | | PIQA (5-shot) | 77.6 | 78.2 | 68.2 | 73.2 | 72.6 | 76.2 | 81.2 | 83.2 | 78.3 | 83.7 | 88.7 | | Social IQA (5-shot) | 72.5 | 75.1 | 68.3 | 73.9 | 75.3 | 75.3 | 77.6 | 71.8 | 73.4 | 74.7 | 82.9 | | TruthfulQA (MC2) (10-shot) | 66.4 | 65.2 | 59.2 | 62.9 | 64.3 | 69.4 | 63.0 | 69.2 | 64.1 | 76.6 | 78.2 | | Winogrande (5-shot) | 67.0 | 72.2 | 53.2 | 59.8 | 63.3 | 71.1 | 63.1 | 64.7 | 65.4 | 74.0 | 76.9 | | **Multilingual** | | | | | | | | | | | | | Multilingual MMLU (5-shot) | 49.3 | 51.8 | 48.1 | 46.4 | 55.9 | 64.4 | 53.7 | 56.2 | 54.5 | 63.8 | 72.9 | | MGSM (0-shot, CoT) | 63.9 | 49.6 | 44.6 | 44.6 | 53.5 | 64.5 | 56.7 | 56.7 | 58.6 | 75.1 | 81.7 | | **Math** | | | | | | | | | | | | | GSM8K (8-shot, CoT) | 88.6 | 76.9 | 75.6 | 80.1 | 80.6 | 88.7 | 81.9 | 82.4 | 84.3 | 84.9 | 91.3 | | MATH (0-shot, CoT) | 64.0 | 49.8 | 46.7 | 41.8 | 61.7 | 60.4 | 41.6 | 47.6 | 46.1 | 51.3 | 70.2 | | **Overall** | **63.5** | **60.5** | **56.2** | **56.9** | **60.1** | **67.9** | **60.2** | **62.3** | **60.9** | **65.0** | **75.5** | Overall, the model with only 3.8B-param achieves a similar level of multilingual language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store too much factual knowledge, therefore, users may experience factual incorrectness. However, it may be possible to resolve such weakness by augmenting Phi-4 with a search engine, particularly when using the model under RAG settings. ## Usage ### Tokenizer Phi-4-mini-instruct supports a vocabulary size of up to `200064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-4-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. 
### Input Formats Given the nature of the training data, the Phi-4-mini-instruct model is best suited for prompts using specific formats. Below are the two primary formats: #### Chat format This format is used for general conversation and instructions: ```yaml <|system|>Insert System Message<|end|><|user|>Insert User Message<|end|><|assistant|> ``` #### Tool-enabled function-calling format This format is used when the user wants the model to provide function calls based on the given tools. The user should provide the available tools in the system prompt, wrapped by <|tool|> and <|/tool|> tokens. The tools should be specified in JSON format, using a JSON dump structure. Example: ` <|system|>You are a helpful assistant with some tools.<|tool|>[{"name": "get_weather_updates", "description": "Fetches weather updates for a given city using the RapidAPI Weather API.", "parameters": {"city": {"description": "The name of the city for which to retrieve weather information.", "type": "str", "default": "London"}}}]<|/tool|><|end|><|user|>What is the weather like in Paris today?<|end|><|assistant|> ` ### Inference with vLLM #### Requirements List of required packages: ``` flash_attn==2.7.4.post1 torch==2.6.0 vllm>=0.7.2 ``` #### Example To perform inference using vLLM, you can use the following code snippet: ```python from vllm import LLM, SamplingParams llm = LLM(model="microsoft/Phi-4-mini-instruct", trust_remote_code=True) messages = [ {"role": "system", "content": "You are a helpful AI assistant."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] sampling_params = SamplingParams( max_tokens=500, temperature=0.0, ) output = llm.chat(messages=messages, sampling_params=sampling_params) print(output[0].outputs[0].text) ``` ### Inference with Transformers #### Requirements Phi-4 family has been integrated in the `4.49.0` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`. List of required packages: ``` flash_attn==2.7.4.post1 torch==2.6.0 transformers==4.49.0 accelerate==1.3.0 ``` Phi-4-mini-instruct is also available in [Azure AI Studio]() #### Example After obtaining the Phi-4-mini-instruct model checkpoints, users can use this sample code for inference. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_path = "microsoft/Phi-4-mini-instruct" model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path) messages = [ {"role": "system", "content": "You are a helpful AI assistant."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. 
Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` ## Responsible AI Considerations Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English. + Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 4 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: The majority of Phi 4 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, it is strongly recommended that users manually verify all API uses. + Long Conversation: Phi 4 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift. Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi 4 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. 
Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model + **Architecture:** Phi-4-mini-instruct has 3.8B parameters and is a dense decoder-only Transformer model. When compared with Phi-3.5-mini, the major changes with Phi-4-mini-instruct are 200K vocabulary, grouped-query attention, and shared input and output embedding.<br> + **Inputs:** Text. It is best suited for prompts using the chat format.<br> + **Context length:** 128K tokens<br> + **GPUs:** 512 A100-80G<br> + **Training time:** 21 days<br> + **Training data:** 5T tokens<br> + **Outputs:** Generated text in response to the input<br> + **Dates:** Trained between November and December 2024<br> + **Status:** This is a static model trained on offline datasets with the cutoff date of June 2024 for publicly available data.<br> + **Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br> + **Release date:** February 2025<br> ### Training Datasets Phi-4-mini’s training data includes a wide variety of sources, totaling 5 trillion tokens, and is a combination of 1) publicly available documents filtered for quality, selected high-quality educational data, and code 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (e.g., science, daily activities, theory of mind, etc.) 3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. Focus was placed on the quality of data that could potentially improve the reasoning ability for the model, and the publicly available documents were filtered to contain a preferred level of knowledge. 
As an example, the result of a game in premier league on a particular day might be good training data for frontier models, but such information was removed to leave more model capacity for reasoning for the model’s small size. More details about data can be found in the Phi-4-mini-instruct technical report. The decontamination process involved normalizing and tokenizing the dataset, then generating and comparing n-grams between the target dataset and benchmark datasets. Samples with matching n-grams above a threshold were flagged as contaminated and removed from the dataset. A detailed contamination report was generated, summarizing the matched text, matching ratio, and filtered results for further analysis. ### Fine-tuning A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/sample_finetune.py). ## Safety Evaluation and Red-Teaming Various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets were leveraged to evaluate Phi-4 models’ propensity to produce undesirable outputs across multiple languages and risk categories. Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety post-training that was done as detailed in the Phi 3 Safety Post-Training paper had a positive impact across multiple languages and risk categories as observed by refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Details on prior red team evaluations across Phi models can be found in the Phi 3 Safety Post-Training paper. For this release, the red team tested the model in English, Chinese, Japanese, Spanish, Portuguese, Arabic, Thai, and Russian for the following potential harms: Hate Speech and Bias, Violent Crimes, Specialized Advice, and Election Information. Their findings indicate that the model is resistant to jailbreak techniques across languages, but that language-specific attack prompts leveraging cultural context can cause the model to output harmful content. Another insight was that with function calling scenarios, the model could sometimes hallucinate function names or URL’s. The model may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages, and risk areas that account for cultural nuances where those languages are spoken. ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-4-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" ## License The model is licensed under the [MIT license](./LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. 
Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies. ## Appendix A: Benchmark Methodology We include a brief word on methodology here - and in particular, how we think about optimizing prompts. In an ideal world, we would never change any prompts in our benchmarks to ensure an apples-to-apples comparison when comparing different models. Indeed, this is our default approach, and is the case in the vast majority of models we have run to date. There are, however, some exceptions to this. In some cases, we see a model that performs worse than expected on a given eval due to a failure to respect the output format. For example: + A model may refuse to answer questions (for no apparent reason), or in coding tasks models may prefix their response with “Sure, I can help with that. …”, which may break the parser. In such cases, we have opted to try different system messages (e.g. “You must always respond to a question” or “Get to the point!”). + With some models, we observed that few-shot examples actually hurt model performance. In this case, we did allow running the benchmarks with 0 shots for all cases. + We have tools to convert between chat and completions APIs. When converting a chat prompt to a completion prompt, some models use different keywords, e.g. Human vs. User. In these cases, we do allow for model-specific mappings for chat-to-completion prompts. However, we do not: + Pick different few-shot examples. Few shots will always be the same when comparing different models. + Change the prompt format: e.g. if it is an A/B/C/D multiple choice, we do not tweak this to 1/2/3/4 multiple choice. ### Benchmark datasets The model was evaluated across a breadth of public and internal benchmarks to understand its capabilities under multiple tasks and conditions. While most evaluations use English, a leading multilingual benchmark was incorporated to cover performance in select languages. More specifically, + Reasoning: + Winogrande: commonsense reasoning around pronoun resolution + PIQA: physical commonsense reasoning around everyday situations + ARC-Challenge: grade-school multiple-choice science questions + GPQA: very hard questions written and validated by experts in biology, physics, and chemistry + MedQA: medical question answering + Social IQA: social commonsense intelligence + BoolQ: natural questions from context + TruthfulQA: grounded reasoning + Language understanding: + HellaSwag: commonsense natural language inference around everyday events + ANLI: adversarial natural language inference + Function calling: + Berkeley Function Calling Leaderboard: function and tool calling + Internal function calling benchmarks + World knowledge: + TriviaQA: trivia questions on general topics + Math: + GSM8K: grade-school math word problems + GSM8K Hard: grade-school math word problems with large values and some absurdity.
+ MATH: challenging competition math problems + Code: + HumanEval, HumanEval+, MBPP, MBPP+: Python coding tasks + LiveCodeBench, LiveBench: contamination-free code tasks + BigCodeBench: challenging programming tasks + Spider: SQL query tasks + Internal coding benchmarks + Instruction following: + IFEval: verifiable instructions + Internal instruction-following benchmarks + Multilingual: + MGSM: multilingual grade-school math + Multilingual MMLU and MMLU-pro + MEGA: multilingual NLP tasks + Popular aggregated datasets: MMLU, MMLU-pro, BigBench-Hard, AGI Eval + Multi-turn conversations: + Data generated by an in-house adversarial conversation simulation tool + Single-turn trustworthiness evaluation: + DecodingTrust: a collection of trustworthiness benchmarks across eight different perspectives + XSTest: exaggerated safety evaluation + Toxigen: adversarial and hate speech detection + Red Team: + Responses to prompts provided by the AI Red Team at Microsoft
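To make the Hardware note above concrete, the following is a minimal loading sketch for GPUs without flash-attention support (e.g. NVIDIA V100 or earlier), using `attn_implementation="eager"` as the card instructs. It assumes the Hugging Face `transformers` API; the dtype, device settings, and example prompt are illustrative assumptions rather than part of the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-instruct"

# Fall back to the eager attention implementation on GPUs that lack
# flash-attention support (e.g. NVIDIA V100 or earlier), per the Hardware section.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="eager",  # instead of the default flash attention
    torch_dtype=torch.float16,    # illustrative; choose a dtype your GPU supports
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The card recommends the chat format for prompts.
messages = [{"role": "user", "content": "Explain grouped-query attention in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```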
{"base_model": "microsoft/Phi-4-mini-instruct", "language": ["multilingual"], "library_name": "transformers", "license": "mit", "license_link": "https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "tags": ["phi", "phi4", "unsloth", "nlp", "code", "microsoft", "math", "chat", "conversational"]}
dataset
null
555
legalvn/paraphrase-multilingual-MiniLM-L12-v2-166000
legalvn
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:651725", "loss:SoftmaxLoss", "arxiv:1908.10084", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-12-04T09:45:52Z
2024-12-04T09:46:05+00:00
10
0
--- base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:651725 - loss:SoftmaxLoss widget: - source_sentence: Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào? sentences: - Chu kỳ kiểm định chất lượng giáo dục nghề nghiệp\n...\n2. Trường hợp cơ sở giáo dục nghề nghiệp có ngành, nghề trọng điểm; chương trình đào tạo ngành, nghề trọng điểm; cơ sở giáo dục nghề nghiệp và chương trình đào tạo các ngành, nghề phục vụ yêu cầu công tác quản lý nhà nước phải thực hiện kiểm định chất lượng giáo dục nghề nghiệp theo quy định tại điểm d khoản 3 Điều 65 của Luật Giáo dục nghề nghiệp số 74/2014/QH13 ngày 27 tháng 11 năm 2014 nhưng không đạt tiêu chuẩn kiểm định chất lượng giáo dục nghề nghiệp thì trong thời hạn 03 năm phải thực hiện kiểm định lại. - Vệ sinh môi trường, vệ sinh tòa nhà\n1. Trách nhiệm của các đơn vị, cán bộ, công chức, viên chức, nhân viên và người lao động trong việc giữ gìn vệ sinh tại nơi làm việc và khu vực công cộng:\na) Hàng ngày tự vệ sinh sàn nhà, bàn ghế, tủ, các thiết bị được trang cấp và tổng vệ sinh phòng làm việc vào chiều thứ Sáu hàng tuần;\nb) Có trách nhiệm thu gom rác thải trong phòng chuyển ra thùng rác đặt tại các hành lang;\nc) Không đổ nước chè, cà phê, ….. xuống sàn nhà, hành lang, tường nhà và khu vệ sinh;\nd) Nghiêm cấp hút thuốc lá trong phòng làm việc, phòng họp, cầu thang máy, cầu thang bộ, tầng hầm;\nđ) Không khạc nhổ, bôi bẩn lên tường, không vứt rác thải, gạt tàn thuốc lá, đầu lọc thuốc lá xuống sàn nhà và các khu vực công cộng;\ne) Nghiêm cấm hái hoa, bẻ cành, dẫm lên thảm cỏ, nhổ cây trong khuôn viên cơ quan.\ng) Nghiêm cấm mang chất độc hại vào cơ quan.\n… - Nguyên tắc áp dụng\n1. Trường hợp công chức, viên chức chuyên môn y tế thuộc đối tượng được hưởng các mức phụ cấp ưu đãi theo nghề khác nhau thì được hưởng một mức phụ cấp ưu đãi theo nghề cao nhất.\n2. Công chức, viên chức đã hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch số 06/2010/TTLT-BYT-BNV-BTC ngày 22/3/2010 của Bộ Y tế, Bộ Nội vụ, Bộ Tài chính hướng dẫn thực hiện Nghị định số 64/2009/NĐ-CP ngày 30/7/2009 của Chính phủ về chính sách đối với cán bộ, viên chức y tế công tác ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn thì không hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch này. - source_sentence: Số lượng thành viên Hội đồng khoa học và đào tạo là bao nhiêu? sentences: - 'Cấp Giấy chứng nhận chất lượng an toàn kỹ thuật và bảo vệ môi trường trong sản xuất, lắp ráp ô tô, rơ moóc và sơ mi rơ moóc\n2.1. 
Trình tự thực hiện:\na) Nộp hồ sơ TTHC:\n- Cơ sở sản xuất lập hồ sơ kiểm tra xe cơ giới theo quy định và nộp đến Cục Đăng kiểm Việt Nam.\nb) Giải quyết TTHC:\n- Cục Đăng kiểm Việt Nam tiếp nhận và kiểm tra thành phần hồ sơ kiểm tra xe cơ giới: nếu hồ sơ không đầy đủ theo quy định thì hướng dẫn Cơ sở sản xuất hoàn thiện lại; Nếu hồ sơ đầy đủ theo quy định thì thống nhất về thời gian và địa điểm thực hiện đánh giá điều kiện kiểm tra chất lượng sản phẩm tại Cơ sở sản xuất;\n- Cục Đăng kiểm Việt Nam tiến hành kiểm tra nội dung hồ sơ và thực hiện đánh giá điều kiện kiểm tra chất lượng sản phẩm tại Cơ sở sản xuất theo quy định: Nếu chưa đạt yêu cầu thì thông báo để Cơ sở sản xuất hoàn thiện lại; Nếu đạt yêu cầu thì cấp Giấy chứng nhận trong thời hạn 03 ngày làm việc kể từ ngày kết thúc kiểm tra, đánh giá hồ sơ đầy đủ, hợp lệ theo quy định và có kết quả đánh giá COP đạt yêu cầu;\n- Cơ sở sản xuất nộp hồ sơ kiểm tra xe cơ giới và nhận kết quả trực tiếp tại trụ sở Cục Đăng kiểm Việt Nam hoặc qua hệ thống bưu chính hoặc qua hệ thống dịch vụ công trực tuyến hoặc qua hình thức phù hợp khác.\n...' - Phiên họp Hội đồng khoa học\n1. Hội đồng khoa học họp định kỳ 06 tháng/01 lần. Các phiên họp định kỳ phải có ít nhất 2/3 tổng số thành viên của Hội đồng khoa học tham dự.\n2. Phiên họp đột xuất của Hội đồng khoa học được triệu tập theo quyết định của Chủ tịch và phải có trên 1/2 số thành viên của Hội đồng khoa học tham dự.\n3. Viện trưởng VKSND tối cao tham dự phiên họp của Hội đồng khoa học khi thấy cần thiết.\n4. Tùy thuộc vào nội dung chương trình phiên họp, Chủ tịch Hội đồng khoa học có thể quyết định mời các nhà khoa học trong và ngoài ngành KSND tham gia phiên họp.\n5. Nội dung phiên họp, các tài liệu liên quan đến phiên họp của Hội đồng khoa học phải được thông báo hoặc chuyển cho các Thành viên chậm nhất là 3 ngày làm việc trước ngày họp, trừ trường hợp đột xuất.\n6. Hội đồng khoa học thảo luận dân chủ, tập thể, công khai, quyết định theo đa số về những vấn đề thuộc nội dung phiên họp và những vấn đề do Chủ tịch Hội đồng khoa học nêu ra hoặc do các Thành viên đề nghị và được Chủ tịch Hội đồng khoa học chấp thuận.\nChủ tịch Hội đồng khoa học chủ trì thảo luận và kết luận tại phiên họp. Đối với những vấn đề phức tạp còn nhiều ý kiến khác nhau, Hội đồng khoa học tiến hành biểu quyết. Những vấn đề được biểu quyết đạt trên 2/3 số phiếu của thành viên có mặt hoặc trên 50% tổng số thành viên Hội đồng được coi là ý kiến chính thức của Hội đồng khoa học. Các ý kiến khác được bảo lưu, ghi vào biên bản cuộc họp. - Hồ sơ, thủ tục công nhận liệt sĩ\n1. Người khi hy sinh đang thuộc quân đội, công an quản lý thì Bộ Quốc phòng, Bộ Công an chịu trách nhiệm:\na) Hướng dẫn về quy trình lập hồ sơ đề nghị công nhận liệt sĩ theo quy định.\nb) Có văn bản đề nghị kèm hồ sơ gửi Bộ Lao động - Thương binh và Xã hội thẩm định trong thời gian không quá 50 ngày kể từ ngày cơ quan, đơn vị trực tiếp quản lý người hy sinh xác lập, hoàn thiện các giấy tờ quy định tại Điều 17 Nghị định này. - source_sentence: Ban Tài chính Văn phòng Kiểm toán nhà nước thực hiện những chức năng gì? sentences: - 'Tiếp nhận hồ sơ và trả kết quả\n...\n2.2.4. Lao động nam hoặc người chồng của lao động nữ mang thai hộ nghỉ việc khi vợ sinh con: Bản sao giấy chứng sinh hoặc bản sao giấy khai sinh hoặc trích lục khai sinh của con; trường hợp sinh con phải phẫu thuật hoặc sinh con dưới 32 tuần tuổi mà giấy chứng sinh không thể hiện thì có thêm giấy tờ của cơ sở khám bệnh, chữa bệnh thể hiện việc sinh con phải phẫu thuật, sinh con dưới 32 tuần tuổi. 
Trường hợp con chết sau khi sinh mà chưa được cấp giấy chứng sinh thì thay bằng trích sao hoặc tóm tắt hồ sơ bệnh án hoặc giấy ra viện của người mẹ hoặc của lao động nữ mang thai hộ thể hiện con chết…' - Việc tự giám sát chất lượng dịch vụ viễn thông của doanh nghiệp viễn thông\n1. Các doanh nghiệp viễn thông được Bộ Thông tin và Truyền thông cấp giấy phép kinh doanh dịch vụ viễn thông phải thường xuyên tự giám sát chất lượng dịch vụ đối với tất cả các dịch vụ thuộc “Danh mục dịch vụ viễn thông bắt buộc quản lý chất lượng” mà mình cung cấp.\n2. Trong trường hợp dịch vụ mà mình cung cấp có sự cố thì doanh nghiệp viễn thông phải thực hiện báo cáo đột xuất như quy định tại Khoản 3 Điều 8 của Thông tư này. - Cục Quản lý, giám sát bảo hiểm; Cục Quản lý Công sản; Cục Quản lý Giá; Cục Quản lý Nợ và Tài chính đối ngoại; Cục Quản lý, giám sát Kế toán, Kiểm toán; Cục Quản lý Công sản; Cục Tài chính doanh nghiệp và Vụ Tài chính ngân hàng chủ trì phối hợp với Cục Tin học & Thống kê Tài chính xây dựng quy trình điện tử từng thủ tục hành chính theo phạm vi quản lý đối với danh mục thủ tục hành chính để thực hiện tích hợp trên Hệ thống thông tin Một cửa điện tử của Bộ Tài chính. - source_sentence: Điều kiện để Giám đốc Học viện An ninh nhân dân được thăng cấp bậc hàm trước thời hạn như thế nào? sentences: - Mức độ tự chủ và trách nhiệm\n- Có ý thức và tác phong nghề nghiệp đúng chuẩn mực, có năng lực thực hiện công việc được giao; phương pháp làm việc khoa học, biết phân tích và giải quyết các vấn đề mới về lĩnh vực chuyên môn nghề;\n- Gắn bó nghề nghiệp; nghiêm chỉnh chấp hành quy chế, quy định của cơ quan, doanh nghiệp, nơi đang công tác với ý thức tổ chức kỉ luật và tinh thần trách nhiệm cao trong công việc;\n- Lập được các biện pháp an toàn và đảm bảo an toàn, vệ sinh lao động trong quá trình làm việc; có ý thức trách nhiệm công dân, thái độ và đạo đức nghề nghiệp đúng đắn, sẵn sàng nhận nhiệm vụ; tự tin, cầu tiến trong công việc; hợp tác, thân thiện, khiêm tốn trong các mối quan hệ;\n- Tự chịu trách nhiệm về chất lượng đối với kết quả công việc, sản phẩm do mình đảm nhiệm theo các tiêu chuẩn và chịu một phần trách nhiệm đối với kết quả công việc, sản phẩm của tổ, nhóm; - Tổ chức bộ máy\n...\n5. Tổng cục Hải quan có thể biệt phái công chức từ các đơn vị thuộc và trực thuộc Tổng cục để bổ sung cán bộ chủ chốt, cán bộ kỹ thuật có năng lực, kinh nghiệm cho Ban Quản lý dự án đầu tư xây dựng chuyên ngành của Tổng cục Hải quan. Thời hạn biệt phái các công chức không quá 03 năm, trường hợp quá 03 năm mà chưa hoàn thành dự án thì Tổng cục Hải quan xem xét quyết định bổ sung thời gian biệt phái.\nNhân sự tuyển dụng mới của Ban Quản lý dự án đầu tư xây dựng chuyên ngành của Tổng cục Hải quan là viên chức hoặc hợp đồng lao động, thực hiện theo quy định về chế độ tiền lương và các chế độ, chính sách đối với viên chức và người lao động.\n... - Biệt phái công chức\n...\n6. Không thực hiện biệt phái công chức nữ đang mang thai hoặc nuôi con dưới 36 tháng tuổi. - source_sentence: Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức, viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào? sentences: - Nhiệm vụ của giáo viên\n1. Thực hiện nhiệm vụ tổ chức các hoạt động dạy học, giáo dục theo kế hoạch giáo dục của nhà trường và kế hoạch giáo dục của tổ chuyên môn; quản lý học sinh trong các hoạt động giáo dục do nhà trường tổ chức; tham gia các hoạt động chuyên môn; chịu trách nhiệm về chất lượng, hiệu quả giáo dục.\n2. 
Trau dồi đạo đức, nêu cao tinh thần trách nhiệm, giữ gìn phẩm chất, danh dự, uy tín của nhà giáo; gương mẫu trước học sinh; thương yêu, đối xử công bằng và tôn trọng nhân cách của học sinh; bảo vệ các quyền và lợi ích chính đáng của học sinh; đoàn kết, giúp đỡ đồng nghiệp.\n3. Học tập, rèn luyện để nâng cao sức khỏe, trình độ chính trị, chuyên môn, nghiệp vụ, đổi mới phương pháp dạy học, giáo dục.\n4. Tham gia tập huấn, bồi dưỡng chuyên môn, nghiệp vụ.\n5. Tham gia công tác phổ cập giáo dục trung học cơ sở ở địa phương.\n6. Thực hiện nghĩa vụ công dân, các quy định của pháp luật và của ngành Giáo dục, các quyết định của hiệu trưởng; thực hiện nhiệm vụ do hiệu trưởng phân công, chịu sự kiểm tra, đánh giá của hiệu trưởng và các cấp quản lý giáo dục.\n7. Phối hợp với Đội Thiếu niên Tiền phong Hồ Chí Minh, Đoàn Thanh niên Cộng sản Hồ Chí Minh, Hội Liên hiệp Thanh niên Việt Nam, gia đình học sinh và các tổ chức xã hội liên quan để tổ chức hoạt động giáo dục.\n8. Thực hiện các nhiệm vụ khác theo quy định của pháp luật. - “Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \n1. Máy PCR. \n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \n3. Test kít xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \n4. Máy thở chức năng cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng cao, máy thở xách tay. \n5. Máy lọc máu liên tục. \n6. Máy X-Quang di động. \n7. Máy đo khí máu (đo được điện giải, lactat, hematocrite). \n8. Máy theo dõi bệnh nhân>5 thông số. \n9. Bơm tiêm điện; Bơm truyền dịch. \n10. Máy phá rung tim có tạo nhịp. \n11. Máy đo thời gian đông máu. \n12. Máy đo huyết động.” - Thời điểm đánh giá xếp loại chất lượng hằng năm\n...\n2. Căn cứ tình hình thực tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\n3. Tại thời điểm đánh giá, xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá, xếp loại chất lượng theo quy định của pháp luật và Quy chế này. --- # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("legalvn/paraphrase-multilingual-MiniLM-L12-v2-166000") # Run inference sentences = [ 'Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức, viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?', 'Thời điểm đánh giá xếp loại chất lượng hằng năm\\n...\\n2. Căn cứ tình hình thực tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\\n3. Tại thời điểm đánh giá, xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá, xếp loại chất lượng theo quy định của pháp luật và Quy chế này.', '“Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \\n1. Máy PCR. \\n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \\n3. Test kít xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \\n4. Máy thở chức năng cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng cao, máy thở xách tay. \\n5. Máy lọc máu liên tục. \\n6. Máy X-Quang di động. \\n7. Máy đo khí máu (đo được điện giải, lactat, hematocrite). \\n8. Máy theo dõi bệnh nhân>5 thông số. \\n9. Bơm tiêm điện; Bơm truyền dịch. \\n10. Máy phá rung tim có tạo nhịp. \\n11. Máy đo thời gian đông máu. \\n12. 
Máy đo huyết động.”', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 651,725 training samples * Columns: <code>queries</code>, <code>corpus</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | queries | corpus | score | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 9 tokens</li><li>mean: 24.71 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 121.6 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~43.80%</li><li>1: ~37.00%</li><li>2: ~19.20%</li></ul> | * Samples: | queries | corpus | score | |:------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Người học ngành quản lý khai thác công trình thủy lợi trình độ cao đẳng phải có khả năng học tập và nâng cao trình độ như thế nào?</code> | <code>Khả năng học tập, nâng cao trình độ\n- Khối lượng khối lượng kiến thức tối thiểu, yêu cầu về năng lực mà người học phải đạt được sau khi tốt nghiệp ngành, nghề Dược trình độ cao đẳng có thể tiếp tục phát triển ở các trình độ cao hơn;\n- Người học sau tốt nghiệp có năng lực tự học, tự cập nhật những tiến bộ khoa học công nghệ trong phạm vi ngành, nghề để nâng cao trình độ hoặc học liên thông lên trình độ cao hơn trong cùng ngành nghề hoặc trong nhóm ngành, nghề hoặc trong cùng lĩnh vực đào tạo.</code> | <code>2</code> | | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nTrong phạm vi điều chỉnh 
của văn bản quy phạm pháp luật:\n1. Xác định nội dung liên quan đến vấn đề bình đẳng giới hoặc vấn đề bất bình đẳng giới, phân biệt đối xử về giới.\n2. Quy định các biện pháp cần thiết để thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới; dự báo tác động của các quy định đó đối với nam và nữ sau khi được ban hành.\n3. Xác định nguồn nhân lực, tài chính cần thiết để triển khai các biện pháp thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới.</code> | <code>2</code> | | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Mục đích lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nLồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật (sau đây gọi tắt là văn bản) là một biện pháp để thực hiện mục tiêu bình đẳng giới, xóa bỏ phân biệt đối xử về giới, bảo đảm quyền, lợi ích hợp pháp, phù hợp với đặc thù của mỗi giới; tạo cơ hội phát triển như nhau cho nam và nữ trong các lĩnh vực của đời sống xã hội và gia đình; bảo đảm bình đẳng giới thực chất giữa nam và nữ.</code> | <code>1</code> | * Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss) ### Training Hyperparameters #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3.0 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - 
`ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | |:------:|:------:|:-------------:| | 0.0061 | 500 | 1.0473 | | 0.0123 | 1000 | 1.0447 | | 0.0184 | 1500 | 1.0383 | | 0.0246 | 2000 | 1.0395 | | 0.0307 | 2500 | 1.0436 | | 0.0368 | 3000 | 1.0375 | | 0.0430 | 3500 | 1.0189 | | 0.0491 | 4000 | 1.0282 | | 0.0552 | 4500 | 1.0355 | | 0.0614 | 5000 | 1.0286 | | 0.0675 | 5500 | 1.0264 | | 0.0737 | 6000 | 1.0174 | | 0.0798 | 6500 | 1.0238 | | 0.0859 | 7000 | 1.0217 | | 0.0921 | 7500 | 1.0203 | | 0.0982 | 8000 | 1.0201 | | 0.1043 | 8500 | 1.0266 | | 0.1105 | 9000 | 1.0379 | | 0.1166 | 9500 | 1.0367 | | 0.1228 | 10000 | 1.0384 | | 0.1289 | 10500 | 1.0291 | | 0.1350 | 11000 | 1.0362 | | 0.1412 | 11500 | 1.0354 | | 0.1473 | 12000 | 1.0204 | | 0.1534 | 12500 | 1.0401 | | 0.1596 | 13000 | 1.0237 | | 0.1657 | 13500 | 1.0271 | | 0.1719 | 14000 | 1.0235 | | 0.1780 | 14500 | 1.0329 | | 0.1841 | 15000 | 1.0474 | | 0.1903 | 15500 | 1.0547 | | 0.1964 | 16000 | 1.0557 | | 0.2025 | 16500 | 1.0626 | | 0.2087 | 17000 | 1.0551 | | 0.2148 | 17500 | 1.0526 | | 0.2210 | 18000 | 1.125 | | 0.2271 | 18500 | 1.2996 | | 0.2332 | 19000 | 1.0703 | | 0.2394 | 19500 | 1.0601 | | 0.2455 | 20000 | 1.0835 | | 0.2516 | 20500 | 1.0583 | | 0.2578 | 21000 | 1.141 | | 0.2639 | 21500 | 1.0802 | | 0.2701 | 22000 | 1.0589 | | 0.2762 | 22500 | 1.086 | | 0.2823 | 23000 | 1.0743 | | 0.2885 | 23500 | 1.0605 | | 0.2946 | 24000 | 1.0602 | | 0.3007 | 24500 | 1.0732 | | 0.3069 | 25000 | 1.0614 | | 0.3130 | 25500 | 1.0666 | | 0.3192 | 26000 | 1.0669 | | 0.3253 | 26500 | 1.0627 | | 0.3314 | 27000 | 1.0659 | | 0.3376 | 27500 | 1.07 | | 0.3437 | 28000 | 1.0783 | | 0.3498 | 28500 | 1.078 | | 0.3560 | 29000 | 1.0832 | | 0.3621 | 29500 | 1.0695 | | 0.3683 | 30000 | 1.0714 | | 0.3744 | 30500 | 1.3794 | | 0.3805 | 31000 | 1.0838 | | 0.3867 | 31500 | 1.0541 | | 0.3928 | 32000 | 1.0799 | | 0.3989 | 32500 | 1.0622 | | 0.4051 | 33000 | 1.0597 | | 0.4112 | 33500 | 1.0731 | | 0.4174 | 34000 | 1.0871 | | 0.4235 | 34500 | 1.0535 | | 0.4296 | 35000 | 1.3215 | | 0.4358 | 35500 | 1.1501 | | 0.4419 | 36000 | 1.1088 | | 0.4480 | 36500 | 1.0844 | | 0.4542 | 37000 | 1.0981 | | 0.4603 | 37500 | 1.0856 | | 0.4665 | 38000 | 1.0956 | | 0.4726 | 38500 | 1.0813 | | 0.4787 | 39000 | 1.0843 | | 0.4849 | 39500 | 1.1053 | | 0.4910 | 40000 | 1.092 
| | 0.4971 | 40500 | 1.081 | | 0.5033 | 41000 | 1.0919 | | 0.5094 | 41500 | 1.0681 | | 0.5156 | 42000 | 1.0826 | | 0.5217 | 42500 | 1.0809 | | 0.5278 | 43000 | 1.093 | | 0.5340 | 43500 | 1.0709 | | 0.5401 | 44000 | 1.0623 | | 0.5462 | 44500 | 1.0801 | | 0.5524 | 45000 | 1.0833 | | 0.5585 | 45500 | 1.0816 | | 0.5647 | 46000 | 1.0697 | | 0.5708 | 46500 | 1.0864 | | 0.5769 | 47000 | 1.0744 | | 0.5831 | 47500 | 1.0897 | | 0.5892 | 48000 | 1.0727 | | 0.5953 | 48500 | 1.0621 | | 0.6015 | 49000 | 1.0582 | | 0.6076 | 49500 | 1.0681 | | 0.6138 | 50000 | 1.083 | | 0.6199 | 50500 | 1.0632 | | 0.6260 | 51000 | 1.0809 | | 0.6322 | 51500 | 1.0525 | | 0.6383 | 52000 | 1.6649 | | 0.6444 | 52500 | 1.0873 | | 0.6506 | 53000 | 1.0649 | | 0.6567 | 53500 | 1.0591 | | 0.6629 | 54000 | 1.061 | | 0.6690 | 54500 | 1.0682 | | 0.6751 | 55000 | 1.0616 | | 0.6813 | 55500 | 1.0827 | | 0.6874 | 56000 | 1.0799 | | 0.6935 | 56500 | 1.0705 | | 0.6997 | 57000 | 1.0821 | | 0.7058 | 57500 | 1.0763 | | 0.7120 | 58000 | 1.0842 | | 0.7181 | 58500 | 1.0813 | | 0.7242 | 59000 | 1.0678 | | 0.7304 | 59500 | 1.0894 | | 0.7365 | 60000 | 1.0733 | | 0.7426 | 60500 | 1.0688 | | 0.7488 | 61000 | 1.0665 | | 0.7549 | 61500 | 1.0681 | | 0.7611 | 62000 | 1.301 | | 0.7672 | 62500 | 1.0907 | | 0.7733 | 63000 | 1.3941 | | 0.7795 | 63500 | 1.1355 | | 0.7856 | 64000 | 1.2196 | | 0.7917 | 64500 | 1.225 | | 0.7979 | 65000 | 1.1437 | | 0.8040 | 65500 | 1.0787 | | 0.8102 | 66000 | 1.0686 | | 0.8163 | 66500 | 1.1017 | | 0.8224 | 67000 | 1.0999 | | 0.8286 | 67500 | 1.0771 | | 0.8347 | 68000 | 1.1015 | | 0.8408 | 68500 | 1.0826 | | 0.8470 | 69000 | 1.1046 | | 0.8531 | 69500 | 1.0735 | | 0.8593 | 70000 | 1.1056 | | 0.8654 | 70500 | 1.1077 | | 0.8715 | 71000 | 1.0897 | | 0.8777 | 71500 | 1.0775 | | 0.8838 | 72000 | 1.0907 | | 0.8899 | 72500 | 1.0705 | | 0.8961 | 73000 | 1.0776 | | 0.9022 | 73500 | 1.0896 | | 0.9084 | 74000 | 1.0889 | | 0.9145 | 74500 | 1.0804 | | 0.9206 | 75000 | 1.1087 | | 0.9268 | 75500 | 1.0738 | | 0.9329 | 76000 | 1.0806 | | 0.9390 | 76500 | 1.0899 | | 0.9452 | 77000 | 1.0814 | | 0.9513 | 77500 | 1.0723 | | 0.9575 | 78000 | 1.0923 | | 0.9636 | 78500 | 1.0748 | | 0.9697 | 79000 | 1.0745 | | 0.9759 | 79500 | 1.081 | | 0.9820 | 80000 | 1.08 | | 0.9881 | 80500 | 1.0905 | | 0.9943 | 81000 | 1.1064 | | 1.0004 | 81500 | 1.0929 | | 1.0066 | 82000 | 1.0815 | | 1.0127 | 82500 | 1.0768 | | 1.0188 | 83000 | 1.1004 | | 1.0250 | 83500 | 1.0835 | | 1.0311 | 84000 | 1.0765 | | 1.0372 | 84500 | 1.0906 | | 1.0434 | 85000 | 1.096 | | 1.0495 | 85500 | 1.1085 | | 1.0557 | 86000 | 1.0913 | | 1.0618 | 86500 | 1.0974 | | 1.0679 | 87000 | 1.0763 | | 1.0741 | 87500 | 1.0894 | | 1.0802 | 88000 | 1.1065 | | 1.0863 | 88500 | 1.0898 | | 1.0925 | 89000 | 1.1036 | | 1.0986 | 89500 | 1.0825 | | 1.1048 | 90000 | 1.1164 | | 1.1109 | 90500 | 1.0811 | | 1.1170 | 91000 | 1.115 | | 1.1232 | 91500 | 1.1123 | | 1.1293 | 92000 | 1.0846 | | 1.1354 | 92500 | 1.0917 | | 1.1416 | 93000 | 1.0879 | | 1.1477 | 93500 | 1.0969 | | 1.1539 | 94000 | 1.0849 | | 1.1600 | 94500 | 1.0852 | | 1.1661 | 95000 | 1.0774 | | 1.1723 | 95500 | 1.0984 | | 1.1784 | 96000 | 1.0936 | | 1.1845 | 96500 | 1.0842 | | 1.1907 | 97000 | 1.0895 | | 1.1968 | 97500 | 1.09 | | 1.2030 | 98000 | 1.0813 | | 1.2091 | 98500 | 1.0965 | | 1.2152 | 99000 | 1.1017 | | 1.2214 | 99500 | 1.1045 | | 1.2275 | 100000 | 1.093 | | 1.2336 | 100500 | 1.0903 | | 1.2398 | 101000 | 1.1133 | | 1.2459 | 101500 | 1.0883 | | 1.2521 | 102000 | 1.1192 | | 1.2582 | 102500 | 1.0817 | | 1.2643 | 103000 | 1.0822 | | 1.2705 | 103500 | 1.0915 | | 
1.2766 | 104000 | 1.1128 | | 1.2827 | 104500 | 1.0786 | | 1.2889 | 105000 | 1.1101 | | 1.2950 | 105500 | 1.097 | | 1.3012 | 106000 | 1.095 | | 1.3073 | 106500 | 1.0884 | | 1.3134 | 107000 | 1.09 | | 1.3196 | 107500 | 1.1057 | | 1.3257 | 108000 | 1.087 | | 1.3318 | 108500 | 1.1009 | | 1.3380 | 109000 | 1.0849 | | 1.3441 | 109500 | 1.0886 | | 1.3503 | 110000 | 1.0805 | | 1.3564 | 110500 | 1.0808 | | 1.3625 | 111000 | 1.1025 | | 1.3687 | 111500 | 1.0955 | | 1.3748 | 112000 | 1.0824 | | 1.3809 | 112500 | 1.0835 | | 1.3871 | 113000 | 1.1168 | | 1.3932 | 113500 | 1.0881 | | 1.3994 | 114000 | 1.0946 | | 1.4055 | 114500 | 1.0819 | | 1.4116 | 115000 | 1.1155 | | 1.4178 | 115500 | 1.1021 | | 1.4239 | 116000 | 1.102 | | 1.4300 | 116500 | 1.0733 | | 1.4362 | 117000 | 1.0987 | | 1.4423 | 117500 | 1.1103 | | 1.4485 | 118000 | 1.1034 | | 1.4546 | 118500 | 1.0987 | | 1.4607 | 119000 | 1.0908 | | 1.4669 | 119500 | 1.0986 | | 1.4730 | 120000 | 1.0988 | | 1.4791 | 120500 | 1.1023 | | 1.4853 | 121000 | 1.1013 | | 1.4914 | 121500 | 1.0896 | | 1.4976 | 122000 | 1.8455 | | 1.5037 | 122500 | 1.1155 | | 1.5098 | 123000 | 1.1502 | | 1.5160 | 123500 | 1.1183 | | 1.5221 | 124000 | 1.0958 | | 1.5282 | 124500 | 1.1098 | | 1.5344 | 125000 | 1.1021 | | 1.5405 | 125500 | 1.0912 | | 1.5467 | 126000 | 1.0961 | | 1.5528 | 126500 | 1.0858 | | 1.5589 | 127000 | 1.0784 | | 1.5651 | 127500 | 1.1112 | | 1.5712 | 128000 | 1.1067 | | 1.5773 | 128500 | 1.0986 | | 1.5835 | 129000 | 1.0824 | | 1.5896 | 129500 | 1.1072 | | 1.5958 | 130000 | 1.1098 | | 1.6019 | 130500 | 1.0962 | | 1.6080 | 131000 | 1.1108 | | 1.6142 | 131500 | 1.1187 | | 1.6203 | 132000 | 1.0923 | | 1.6264 | 132500 | 1.1003 | | 1.6326 | 133000 | 1.0865 | | 1.6387 | 133500 | 1.099 | | 1.6449 | 134000 | 1.0838 | | 1.6510 | 134500 | 1.0792 | | 1.6571 | 135000 | 1.0966 | | 1.6633 | 135500 | 1.0782 | | 1.6694 | 136000 | 1.1123 | | 1.6755 | 136500 | 1.0923 | | 1.6817 | 137000 | 1.0873 | | 1.6878 | 137500 | 1.0807 | | 1.6940 | 138000 | 1.083 | | 1.7001 | 138500 | 1.0864 | | 1.7062 | 139000 | 1.0828 | | 1.7124 | 139500 | 1.0973 | | 1.7185 | 140000 | 1.1022 | | 1.7246 | 140500 | 1.0837 | | 1.7308 | 141000 | 1.0985 | | 1.7369 | 141500 | 1.1049 | | 1.7431 | 142000 | 1.079 | | 1.7492 | 142500 | 1.0757 | | 1.7553 | 143000 | 1.0808 | | 1.7615 | 143500 | 1.0743 | | 1.7676 | 144000 | 1.0933 | | 1.7737 | 144500 | 1.0938 | | 1.7799 | 145000 | 1.1121 | | 1.7860 | 145500 | 1.1138 | | 1.7922 | 146000 | 1.1063 | | 1.7983 | 146500 | 1.097 | | 1.8044 | 147000 | 1.0999 | | 1.8106 | 147500 | 1.1035 | | 1.8167 | 148000 | 1.0786 | | 1.8228 | 148500 | 1.0824 | | 1.8290 | 149000 | 1.1097 | | 1.8351 | 149500 | 1.0744 | | 1.8413 | 150000 | 1.0902 | | 1.8474 | 150500 | 1.0841 | | 1.8535 | 151000 | 1.0961 | | 1.8597 | 151500 | 1.0778 | | 1.8658 | 152000 | 1.0784 | | 1.8719 | 152500 | 1.0741 | | 1.8781 | 153000 | 1.0879 | | 1.8842 | 153500 | 1.079 | | 1.8904 | 154000 | 1.0967 | | 1.8965 | 154500 | 1.0906 | | 1.9026 | 155000 | 1.0836 | | 1.9088 | 155500 | 1.0932 | | 1.9149 | 156000 | 1.0823 | | 1.9210 | 156500 | 1.087 | | 1.9272 | 157000 | 1.0892 | | 1.9333 | 157500 | 1.0842 | | 1.9395 | 158000 | 1.0837 | | 1.9456 | 158500 | 1.1001 | | 1.9517 | 159000 | 1.0727 | | 1.9579 | 159500 | 1.0875 | | 1.9640 | 160000 | 1.0845 | | 1.9701 | 160500 | 1.0805 | | 1.9763 | 161000 | 1.0825 | | 1.9824 | 161500 | 1.0886 | | 1.9886 | 162000 | 1.0856 | | 1.9947 | 162500 | 1.0816 | | 2.0008 | 163000 | 1.1005 | | 2.0070 | 163500 | 1.0775 | | 2.0131 | 164000 | 1.0875 | | 2.0192 | 164500 | 1.09 | | 2.0254 | 165000 | 1.086 | | 
2.0315 | 165500 | 1.087 | | 2.0377 | 166000 | 1.0815 | </details> ### Framework Versions - Python: 3.10.10 - Sentence Transformers: 3.3.1 - Transformers: 4.43.0 - PyTorch: 2.5.0+cu124 - Accelerate: 1.1.1 - Datasets: 3.1.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers and SoftmaxLoss ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
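As a complement to the inference example above, here is a minimal fine-tuning sketch showing how the `SoftmaxLoss` objective and the (queries, corpus, score) columns described in Training Details could be used. It relies on the `model.fit` API of sentence-transformers; the example texts and label values are placeholders and are not taken from the actual training set.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the same base model that this fine-tune was derived from.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Each sample is a (query, corpus passage) pair with an integer score in {0, 1, 2},
# mirroring the columns and label distribution reported in the dataset statistics.
train_examples = [
    InputExample(texts=["example legal question", "relevant statute excerpt"], label=2),
    InputExample(texts=["example legal question", "loosely related excerpt"], label=1),
    InputExample(texts=["example legal question", "unrelated excerpt"], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

# SoftmaxLoss trains a classification head on top of the concatenated sentence embeddings.
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,        # the card reports num_train_epochs: 3.0
    warmup_steps=0,  # the card reports warmup_steps: 0
)
```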
[ "PCR" ]
Non_BioNLP
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("legalvn/paraphrase-multilingual-MiniLM-L12-v2-166000") # Run inference sentences = [ 'Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức, viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?', 'Thời điểm đánh giá xếp loại chất lượng hằng năm\\n...\\n2. Căn cứ tình hình thực tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\\n3. Tại thời điểm đánh giá, xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá, xếp loại chất lượng theo quy định của pháp luật và Quy chế này.', '“Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \\n1. Máy PCR. \\n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \\n3. Test kít xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \\n4. 
Máy thở chức năng cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng cao, máy thở xách tay. \\n5. Máy lọc máu liên tục. \\n6. Máy X-Quang di động. \\n7. Máy đo khí máu (đo được điện giải, lactat, hematocrite). \\n8. Máy theo dõi bệnh nhân>5 thông số. \\n9. Bơm tiêm điện; Bơm truyền dịch. \\n10. Máy phá rung tim có tạo nhịp. \\n11. Máy đo thời gian đông máu. \\n12. Máy đo huyết động.”', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 651,725 training samples * Columns: <code>queries</code>, <code>corpus</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | queries | corpus | score | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 9 tokens</li><li>mean: 24.71 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 121.6 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~43.80%</li><li>1: ~37.00%</li><li>2: ~19.20%</li></ul> | * Samples: | queries | corpus | score | |:------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Người học ngành quản lý khai thác công trình thủy lợi trình độ cao đẳng phải có khả năng học tập và nâng cao trình độ như thế nào?</code> | <code>Khả năng học tập, nâng cao trình độ\n- Khối lượng khối lượng kiến thức tối thiểu, yêu cầu về năng lực mà người học phải đạt được sau khi tốt nghiệp ngành, nghề Dược trình độ cao đẳng có thể tiếp tục phát triển ở các trình độ cao hơn;\n- Người học sau tốt nghiệp có năng lực tự học, tự cập nhật những tiến bộ khoa học công nghệ trong phạm vi ngành, nghề để nâng cao trình 
độ hoặc học liên thông lên trình độ cao hơn trong cùng ngành nghề hoặc trong nhóm ngành, nghề hoặc trong cùng lĩnh vực đào tạo.</code> | <code>2</code> | | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nTrong phạm vi điều chỉnh của văn bản quy phạm pháp luật:\n1. Xác định nội dung liên quan đến vấn đề bình đẳng giới hoặc vấn đề bất bình đẳng giới, phân biệt đối xử về giới.\n2. Quy định các biện pháp cần thiết để thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới; dự báo tác động của các quy định đó đối với nam và nữ sau khi được ban hành.\n3. Xác định nguồn nhân lực, tài chính cần thiết để triển khai các biện pháp thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới.</code> | <code>2</code> | | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Mục đích lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nLồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật (sau đây gọi tắt là văn bản) là một biện pháp để thực hiện mục tiêu bình đẳng giới, xóa bỏ phân biệt đối xử về giới, bảo đảm quyền, lợi ích hợp pháp, phù hợp với đặc thù của mỗi giới; tạo cơ hội phát triển như nhau cho nam và nữ trong các lĩnh vực của đời sống xã hội và gia đình; bảo đảm bình đẳng giới thực chất giữa nam và nữ.</code> | <code>1</code> | * Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss) ### Training Hyperparameters #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3.0 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - 
`accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | |:------:|:------:|:-------------:| | 0.0061 | 500 | 1.0473 | | 0.0123 | 1000 | 1.0447 | | 0.0184 | 1500 | 1.0383 | | 0.0246 | 2000 | 1.0395 | | 0.0307 | 2500 | 1.0436 | | 0.0368 | 3000 | 1.0375 | | 0.0430 | 3500 | 1.0189 | | 0.0491 | 4000 | 1.0282 | | 0.0552 | 4500 | 1.0355 | | 0.0614 | 5000 | 1.0286 | | 0.0675 | 5500 | 1.0264 | | 0.0737 | 6000 | 1.0174 | | 0.0798 | 6500 | 1.0238 | | 0.0859 | 7000 | 1.0217 | | 0.0921 | 7500 | 1.0203 | | 0.0982 | 8000 | 1.0201 | | 0.1043 | 8500 | 1.0266 | | 0.1105 | 9000 | 1.0379 | | 0.1166 | 9500 | 1.0367 | | 0.1228 | 10000 | 1.0384 | | 0.1289 | 10500 | 1.0291 | | 0.1350 | 11000 | 1.0362 | | 0.1412 | 11500 | 1.0354 | | 0.1473 | 12000 | 1.0204 | | 0.1534 | 12500 | 1.0401 | | 0.1596 | 13000 | 1.0237 | | 0.1657 | 13500 | 1.0271 | | 0.1719 | 14000 | 1.0235 | | 0.1780 | 14500 | 1.0329 | | 0.1841 | 15000 | 1.0474 | | 0.1903 | 15500 | 1.0547 | | 0.1964 | 16000 | 1.0557 | | 0.2025 | 16500 | 1.0626 | | 0.2087 | 17000 | 1.0551 | | 0.2148 | 17500 | 1.0526 | | 0.2210 | 18000 | 1.125 | | 0.2271 | 18500 | 1.2996 | | 0.2332 | 19000 | 1.0703 | | 0.2394 | 19500 | 1.0601 | | 0.2455 | 20000 | 1.0835 | | 0.2516 | 20500 | 1.0583 | | 0.2578 | 21000 | 1.141 | | 0.2639 | 21500 | 1.0802 | | 0.2701 | 22000 | 1.0589 | | 0.2762 | 22500 | 1.086 | | 0.2823 | 23000 | 1.0743 | | 0.2885 | 23500 | 1.0605 | | 0.2946 | 24000 | 1.0602 | | 0.3007 | 24500 | 1.0732 | | 0.3069 | 25000 | 1.0614 | | 0.3130 | 25500 | 1.0666 | | 0.3192 | 26000 | 1.0669 | | 0.3253 | 26500 | 1.0627 | | 0.3314 | 27000 | 1.0659 | | 0.3376 | 27500 | 1.07 | | 0.3437 | 28000 | 1.0783 | | 0.3498 | 28500 | 1.078 | | 0.3560 | 29000 | 1.0832 | | 0.3621 | 29500 | 1.0695 | | 0.3683 | 30000 | 1.0714 | | 0.3744 | 30500 | 1.3794 | | 0.3805 | 31000 | 1.0838 | | 0.3867 | 31500 | 1.0541 | | 0.3928 | 32000 | 1.0799 | | 0.3989 | 32500 | 1.0622 | | 0.4051 | 33000 
| 1.0597 | | 0.4112 | 33500 | 1.0731 | | 0.4174 | 34000 | 1.0871 | | 0.4235 | 34500 | 1.0535 | | 0.4296 | 35000 | 1.3215 | | 0.4358 | 35500 | 1.1501 | | 0.4419 | 36000 | 1.1088 | | 0.4480 | 36500 | 1.0844 | | 0.4542 | 37000 | 1.0981 | | 0.4603 | 37500 | 1.0856 | | 0.4665 | 38000 | 1.0956 | | 0.4726 | 38500 | 1.0813 | | 0.4787 | 39000 | 1.0843 | | 0.4849 | 39500 | 1.1053 | | 0.4910 | 40000 | 1.092 | | 0.4971 | 40500 | 1.081 | | 0.5033 | 41000 | 1.0919 | | 0.5094 | 41500 | 1.0681 | | 0.5156 | 42000 | 1.0826 | | 0.5217 | 42500 | 1.0809 | | 0.5278 | 43000 | 1.093 | | 0.5340 | 43500 | 1.0709 | | 0.5401 | 44000 | 1.0623 | | 0.5462 | 44500 | 1.0801 | | 0.5524 | 45000 | 1.0833 | | 0.5585 | 45500 | 1.0816 | | 0.5647 | 46000 | 1.0697 | | 0.5708 | 46500 | 1.0864 | | 0.5769 | 47000 | 1.0744 | | 0.5831 | 47500 | 1.0897 | | 0.5892 | 48000 | 1.0727 | | 0.5953 | 48500 | 1.0621 | | 0.6015 | 49000 | 1.0582 | | 0.6076 | 49500 | 1.0681 | | 0.6138 | 50000 | 1.083 | | 0.6199 | 50500 | 1.0632 | | 0.6260 | 51000 | 1.0809 | | 0.6322 | 51500 | 1.0525 | | 0.6383 | 52000 | 1.6649 | | 0.6444 | 52500 | 1.0873 | | 0.6506 | 53000 | 1.0649 | | 0.6567 | 53500 | 1.0591 | | 0.6629 | 54000 | 1.061 | | 0.6690 | 54500 | 1.0682 | | 0.6751 | 55000 | 1.0616 | | 0.6813 | 55500 | 1.0827 | | 0.6874 | 56000 | 1.0799 | | 0.6935 | 56500 | 1.0705 | | 0.6997 | 57000 | 1.0821 | | 0.7058 | 57500 | 1.0763 | | 0.7120 | 58000 | 1.0842 | | 0.7181 | 58500 | 1.0813 | | 0.7242 | 59000 | 1.0678 | | 0.7304 | 59500 | 1.0894 | | 0.7365 | 60000 | 1.0733 | | 0.7426 | 60500 | 1.0688 | | 0.7488 | 61000 | 1.0665 | | 0.7549 | 61500 | 1.0681 | | 0.7611 | 62000 | 1.301 | | 0.7672 | 62500 | 1.0907 | | 0.7733 | 63000 | 1.3941 | | 0.7795 | 63500 | 1.1355 | | 0.7856 | 64000 | 1.2196 | | 0.7917 | 64500 | 1.225 | | 0.7979 | 65000 | 1.1437 | | 0.8040 | 65500 | 1.0787 | | 0.8102 | 66000 | 1.0686 | | 0.8163 | 66500 | 1.1017 | | 0.8224 | 67000 | 1.0999 | | 0.8286 | 67500 | 1.0771 | | 0.8347 | 68000 | 1.1015 | | 0.8408 | 68500 | 1.0826 | | 0.8470 | 69000 | 1.1046 | | 0.8531 | 69500 | 1.0735 | | 0.8593 | 70000 | 1.1056 | | 0.8654 | 70500 | 1.1077 | | 0.8715 | 71000 | 1.0897 | | 0.8777 | 71500 | 1.0775 | | 0.8838 | 72000 | 1.0907 | | 0.8899 | 72500 | 1.0705 | | 0.8961 | 73000 | 1.0776 | | 0.9022 | 73500 | 1.0896 | | 0.9084 | 74000 | 1.0889 | | 0.9145 | 74500 | 1.0804 | | 0.9206 | 75000 | 1.1087 | | 0.9268 | 75500 | 1.0738 | | 0.9329 | 76000 | 1.0806 | | 0.9390 | 76500 | 1.0899 | | 0.9452 | 77000 | 1.0814 | | 0.9513 | 77500 | 1.0723 | | 0.9575 | 78000 | 1.0923 | | 0.9636 | 78500 | 1.0748 | | 0.9697 | 79000 | 1.0745 | | 0.9759 | 79500 | 1.081 | | 0.9820 | 80000 | 1.08 | | 0.9881 | 80500 | 1.0905 | | 0.9943 | 81000 | 1.1064 | | 1.0004 | 81500 | 1.0929 | | 1.0066 | 82000 | 1.0815 | | 1.0127 | 82500 | 1.0768 | | 1.0188 | 83000 | 1.1004 | | 1.0250 | 83500 | 1.0835 | | 1.0311 | 84000 | 1.0765 | | 1.0372 | 84500 | 1.0906 | | 1.0434 | 85000 | 1.096 | | 1.0495 | 85500 | 1.1085 | | 1.0557 | 86000 | 1.0913 | | 1.0618 | 86500 | 1.0974 | | 1.0679 | 87000 | 1.0763 | | 1.0741 | 87500 | 1.0894 | | 1.0802 | 88000 | 1.1065 | | 1.0863 | 88500 | 1.0898 | | 1.0925 | 89000 | 1.1036 | | 1.0986 | 89500 | 1.0825 | | 1.1048 | 90000 | 1.1164 | | 1.1109 | 90500 | 1.0811 | | 1.1170 | 91000 | 1.115 | | 1.1232 | 91500 | 1.1123 | | 1.1293 | 92000 | 1.0846 | | 1.1354 | 92500 | 1.0917 | | 1.1416 | 93000 | 1.0879 | | 1.1477 | 93500 | 1.0969 | | 1.1539 | 94000 | 1.0849 | | 1.1600 | 94500 | 1.0852 | | 1.1661 | 95000 | 1.0774 | | 1.1723 | 95500 | 1.0984 | | 1.1784 | 96000 | 1.0936 | | 1.1845 | 96500 | 1.0842 | 
| 1.1907 | 97000 | 1.0895 | | 1.1968 | 97500 | 1.09 | | 1.2030 | 98000 | 1.0813 | | 1.2091 | 98500 | 1.0965 | | 1.2152 | 99000 | 1.1017 | | 1.2214 | 99500 | 1.1045 | | 1.2275 | 100000 | 1.093 | | 1.2336 | 100500 | 1.0903 | | 1.2398 | 101000 | 1.1133 | | 1.2459 | 101500 | 1.0883 | | 1.2521 | 102000 | 1.1192 | | 1.2582 | 102500 | 1.0817 | | 1.2643 | 103000 | 1.0822 | | 1.2705 | 103500 | 1.0915 | | 1.2766 | 104000 | 1.1128 | | 1.2827 | 104500 | 1.0786 | | 1.2889 | 105000 | 1.1101 | | 1.2950 | 105500 | 1.097 | | 1.3012 | 106000 | 1.095 | | 1.3073 | 106500 | 1.0884 | | 1.3134 | 107000 | 1.09 | | 1.3196 | 107500 | 1.1057 | | 1.3257 | 108000 | 1.087 | | 1.3318 | 108500 | 1.1009 | | 1.3380 | 109000 | 1.0849 | | 1.3441 | 109500 | 1.0886 | | 1.3503 | 110000 | 1.0805 | | 1.3564 | 110500 | 1.0808 | | 1.3625 | 111000 | 1.1025 | | 1.3687 | 111500 | 1.0955 | | 1.3748 | 112000 | 1.0824 | | 1.3809 | 112500 | 1.0835 | | 1.3871 | 113000 | 1.1168 | | 1.3932 | 113500 | 1.0881 | | 1.3994 | 114000 | 1.0946 | | 1.4055 | 114500 | 1.0819 | | 1.4116 | 115000 | 1.1155 | | 1.4178 | 115500 | 1.1021 | | 1.4239 | 116000 | 1.102 | | 1.4300 | 116500 | 1.0733 | | 1.4362 | 117000 | 1.0987 | | 1.4423 | 117500 | 1.1103 | | 1.4485 | 118000 | 1.1034 | | 1.4546 | 118500 | 1.0987 | | 1.4607 | 119000 | 1.0908 | | 1.4669 | 119500 | 1.0986 | | 1.4730 | 120000 | 1.0988 | | 1.4791 | 120500 | 1.1023 | | 1.4853 | 121000 | 1.1013 | | 1.4914 | 121500 | 1.0896 | | 1.4976 | 122000 | 1.8455 | | 1.5037 | 122500 | 1.1155 | | 1.5098 | 123000 | 1.1502 | | 1.5160 | 123500 | 1.1183 | | 1.5221 | 124000 | 1.0958 | | 1.5282 | 124500 | 1.1098 | | 1.5344 | 125000 | 1.1021 | | 1.5405 | 125500 | 1.0912 | | 1.5467 | 126000 | 1.0961 | | 1.5528 | 126500 | 1.0858 | | 1.5589 | 127000 | 1.0784 | | 1.5651 | 127500 | 1.1112 | | 1.5712 | 128000 | 1.1067 | | 1.5773 | 128500 | 1.0986 | | 1.5835 | 129000 | 1.0824 | | 1.5896 | 129500 | 1.1072 | | 1.5958 | 130000 | 1.1098 | | 1.6019 | 130500 | 1.0962 | | 1.6080 | 131000 | 1.1108 | | 1.6142 | 131500 | 1.1187 | | 1.6203 | 132000 | 1.0923 | | 1.6264 | 132500 | 1.1003 | | 1.6326 | 133000 | 1.0865 | | 1.6387 | 133500 | 1.099 | | 1.6449 | 134000 | 1.0838 | | 1.6510 | 134500 | 1.0792 | | 1.6571 | 135000 | 1.0966 | | 1.6633 | 135500 | 1.0782 | | 1.6694 | 136000 | 1.1123 | | 1.6755 | 136500 | 1.0923 | | 1.6817 | 137000 | 1.0873 | | 1.6878 | 137500 | 1.0807 | | 1.6940 | 138000 | 1.083 | | 1.7001 | 138500 | 1.0864 | | 1.7062 | 139000 | 1.0828 | | 1.7124 | 139500 | 1.0973 | | 1.7185 | 140000 | 1.1022 | | 1.7246 | 140500 | 1.0837 | | 1.7308 | 141000 | 1.0985 | | 1.7369 | 141500 | 1.1049 | | 1.7431 | 142000 | 1.079 | | 1.7492 | 142500 | 1.0757 | | 1.7553 | 143000 | 1.0808 | | 1.7615 | 143500 | 1.0743 | | 1.7676 | 144000 | 1.0933 | | 1.7737 | 144500 | 1.0938 | | 1.7799 | 145000 | 1.1121 | | 1.7860 | 145500 | 1.1138 | | 1.7922 | 146000 | 1.1063 | | 1.7983 | 146500 | 1.097 | | 1.8044 | 147000 | 1.0999 | | 1.8106 | 147500 | 1.1035 | | 1.8167 | 148000 | 1.0786 | | 1.8228 | 148500 | 1.0824 | | 1.8290 | 149000 | 1.1097 | | 1.8351 | 149500 | 1.0744 | | 1.8413 | 150000 | 1.0902 | | 1.8474 | 150500 | 1.0841 | | 1.8535 | 151000 | 1.0961 | | 1.8597 | 151500 | 1.0778 | | 1.8658 | 152000 | 1.0784 | | 1.8719 | 152500 | 1.0741 | | 1.8781 | 153000 | 1.0879 | | 1.8842 | 153500 | 1.079 | | 1.8904 | 154000 | 1.0967 | | 1.8965 | 154500 | 1.0906 | | 1.9026 | 155000 | 1.0836 | | 1.9088 | 155500 | 1.0932 | | 1.9149 | 156000 | 1.0823 | | 1.9210 | 156500 | 1.087 | | 1.9272 | 157000 | 1.0892 | | 1.9333 | 157500 | 1.0842 | | 1.9395 | 158000 | 1.0837 | | 1.9456 
| 158500 | 1.1001 | | 1.9517 | 159000 | 1.0727 | | 1.9579 | 159500 | 1.0875 | | 1.9640 | 160000 | 1.0845 | | 1.9701 | 160500 | 1.0805 | | 1.9763 | 161000 | 1.0825 | | 1.9824 | 161500 | 1.0886 | | 1.9886 | 162000 | 1.0856 | | 1.9947 | 162500 | 1.0816 | | 2.0008 | 163000 | 1.1005 | | 2.0070 | 163500 | 1.0775 | | 2.0131 | 164000 | 1.0875 | | 2.0192 | 164500 | 1.09 | | 2.0254 | 165000 | 1.086 | | 2.0315 | 165500 | 1.087 | | 2.0377 | 166000 | 1.0815 | </details> ### Framework Versions - Python: 3.10.10 - Sentence Transformers: 3.3.1 - Transformers: 4.43.0 - PyTorch: 2.5.0+cu124 - Accelerate: 1.1.1 - Datasets: 3.1.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers and SoftmaxLoss ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
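As a minimal usage sketch for the fine-tuned Sentence Transformer described above (the exact repository id is not shown in this excerpt, so the checkpoint name below is a placeholder; the base model listed in the card metadata is sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2, and the passages are shortened stand-ins for the legal texts in the widget examples):

```python
from sentence_transformers import SentenceTransformer

# Placeholder repo id -- substitute the actual fine-tuned checkpoint.
model = SentenceTransformer("your-username/your-finetuned-checkpoint")

query = "Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào?"
passages = [
    "Nguyên tắc áp dụng... (shortened legal passage about phụ cấp ưu đãi theo nghề)",
    "Vệ sinh môi trường, vệ sinh tòa nhà... (shortened, unrelated passage)",
]

# Encode the query and candidate passages, then rank by cosine similarity.
query_emb = model.encode(query)
passage_embs = model.encode(passages)
print(model.similarity(query_emb, passage_embs))  # tensor of shape [1, 2]
```

The `model.similarity` helper assumes Sentence Transformers 3.x, consistent with the framework versions listed above.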
{"base_model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:651725", "loss:SoftmaxLoss"], "widget": [{"source_sentence": "Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào?", "sentences": ["Chu kỳ kiểm định chất lượng giáo dục nghề nghiệp\\n...\\n2. Trường hợp cơ sở giáo dục nghề nghiệp có ngành, nghề trọng điểm; chương trình đào tạo ngành, nghề trọng điểm; cơ sở giáo dục nghề nghiệp và chương trình đào tạo các ngành, nghề phục vụ yêu cầu công tác quản lý nhà nước phải thực hiện kiểm định chất lượng giáo dục nghề nghiệp theo quy định tại điểm d khoản 3 Điều 65 của Luật Giáo dục nghề nghiệp số 74/2014/QH13 ngày 27 tháng 11 năm 2014 nhưng không đạt tiêu chuẩn kiểm định chất lượng giáo dục nghề nghiệp thì trong thời hạn 03 năm phải thực hiện kiểm định lại.", "Vệ sinh môi trường, vệ sinh tòa nhà\\n1. Trách nhiệm của các đơn vị, cán bộ, công chức, viên chức, nhân viên và người lao động trong việc giữ gìn vệ sinh tại nơi làm việc và khu vực công cộng:\\na) Hàng ngày tự vệ sinh sàn nhà, bàn ghế, tủ, các thiết bị được trang cấp và tổng vệ sinh phòng làm việc vào chiều thứ Sáu hàng tuần;\\nb) Có trách nhiệm thu gom rác thải trong phòng chuyển ra thùng rác đặt tại các hành lang;\\nc) Không đổ nước chè, cà phê, ….. xuống sàn nhà, hành lang, tường nhà và khu vệ sinh;\\nd) Nghiêm cấp hút thuốc lá trong phòng làm việc, phòng họp, cầu thang máy, cầu thang bộ, tầng hầm;\\nđ) Không khạc nhổ, bôi bẩn lên tường, không vứt rác thải, gạt tàn thuốc lá, đầu lọc thuốc lá xuống sàn nhà và các khu vực công cộng;\\ne) Nghiêm cấm hái hoa, bẻ cành, dẫm lên thảm cỏ, nhổ cây trong khuôn viên cơ quan.\\ng) Nghiêm cấm mang chất độc hại vào cơ quan.\\n…", "Nguyên tắc áp dụng\\n1. Trường hợp công chức, viên chức chuyên môn y tế thuộc đối tượng được hưởng các mức phụ cấp ưu đãi theo nghề khác nhau thì được hưởng một mức phụ cấp ưu đãi theo nghề cao nhất.\\n2. Công chức, viên chức đã hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch số 06/2010/TTLT-BYT-BNV-BTC ngày 22/3/2010 của Bộ Y tế, Bộ Nội vụ, Bộ Tài chính hướng dẫn thực hiện Nghị định số 64/2009/NĐ-CP ngày 30/7/2009 của Chính phủ về chính sách đối với cán bộ, viên chức y tế công tác ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn thì không hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch này."]}, {"source_sentence": "Số lượng thành viên Hội đồng khoa học và đào tạo là bao nhiêu?", "sentences": ["Cấp Giấy chứng nhận chất lượng an toàn kỹ thuật và bảo vệ môi trường trong sản xuất, lắp ráp ô tô, rơ moóc và sơ mi rơ moóc\\n2.1. 
Trình tự thực hiện:\\na) Nộp hồ sơ TTHC:\\n- Cơ sở sản xuất lập hồ sơ kiểm tra xe cơ giới theo quy định và nộp đến Cục Đăng kiểm Việt Nam.\\nb) Giải quyết TTHC:\\n- Cục Đăng kiểm Việt Nam tiếp nhận và kiểm tra thành phần hồ sơ kiểm tra xe cơ giới: nếu hồ sơ không đầy đủ theo quy định thì hướng dẫn Cơ sở sản xuất hoàn thiện lại; Nếu hồ sơ đầy đủ theo quy định thì thống nhất về thời gian và địa điểm thực hiện đánh giá điều kiện kiểm tra chất lượng sản phẩm tại Cơ sở sản xuất;\\n- Cục Đăng kiểm Việt Nam tiến hành kiểm tra nội dung hồ sơ và thực hiện đánh giá điều kiện kiểm tra chất lượng sản phẩm tại Cơ sở sản xuất theo quy định: Nếu chưa đạt yêu cầu thì thông báo để Cơ sở sản xuất hoàn thiện lại; Nếu đạt yêu cầu thì cấp Giấy chứng nhận trong thời hạn 03 ngày làm việc kể từ ngày kết thúc kiểm tra, đánh giá hồ sơ đầy đủ, hợp lệ theo quy định và có kết quả đánh giá COP đạt yêu cầu;\\n- Cơ sở sản xuất nộp hồ sơ kiểm tra xe cơ giới và nhận kết quả trực tiếp tại trụ sở Cục Đăng kiểm Việt Nam hoặc qua hệ thống bưu chính hoặc qua hệ thống dịch vụ công trực tuyến hoặc qua hình thức phù hợp khác.\\n...", "Phiên họp Hội đồng khoa học\\n1. Hội đồng khoa học họp định kỳ 06 tháng/01 lần. Các phiên họp định kỳ phải có ít nhất 2/3 tổng số thành viên của Hội đồng khoa học tham dự.\\n2. Phiên họp đột xuất của Hội đồng khoa học được triệu tập theo quyết định của Chủ tịch và phải có trên 1/2 số thành viên của Hội đồng khoa học tham dự.\\n3. Viện trưởng VKSND tối cao tham dự phiên họp của Hội đồng khoa học khi thấy cần thiết.\\n4. Tùy thuộc vào nội dung chương trình phiên họp, Chủ tịch Hội đồng khoa học có thể quyết định mời các nhà khoa học trong và ngoài ngành KSND tham gia phiên họp.\\n5. Nội dung phiên họp, các tài liệu liên quan đến phiên họp của Hội đồng khoa học phải được thông báo hoặc chuyển cho các Thành viên chậm nhất là 3 ngày làm việc trước ngày họp, trừ trường hợp đột xuất.\\n6. Hội đồng khoa học thảo luận dân chủ, tập thể, công khai, quyết định theo đa số về những vấn đề thuộc nội dung phiên họp và những vấn đề do Chủ tịch Hội đồng khoa học nêu ra hoặc do các Thành viên đề nghị và được Chủ tịch Hội đồng khoa học chấp thuận.\\nChủ tịch Hội đồng khoa học chủ trì thảo luận và kết luận tại phiên họp. Đối với những vấn đề phức tạp còn nhiều ý kiến khác nhau, Hội đồng khoa học tiến hành biểu quyết. Những vấn đề được biểu quyết đạt trên 2/3 số phiếu của thành viên có mặt hoặc trên 50% tổng số thành viên Hội đồng được coi là ý kiến chính thức của Hội đồng khoa học. Các ý kiến khác được bảo lưu, ghi vào biên bản cuộc họp.", "Hồ sơ, thủ tục công nhận liệt sĩ\\n1. Người khi hy sinh đang thuộc quân đội, công an quản lý thì Bộ Quốc phòng, Bộ Công an chịu trách nhiệm:\\na) Hướng dẫn về quy trình lập hồ sơ đề nghị công nhận liệt sĩ theo quy định.\\nb) Có văn bản đề nghị kèm hồ sơ gửi Bộ Lao động - Thương binh và Xã hội thẩm định trong thời gian không quá 50 ngày kể từ ngày cơ quan, đơn vị trực tiếp quản lý người hy sinh xác lập, hoàn thiện các giấy tờ quy định tại Điều 17 Nghị định này."]}, {"source_sentence": "Ban Tài chính Văn phòng Kiểm toán nhà nước thực hiện những chức năng gì?", "sentences": ["Tiếp nhận hồ sơ và trả kết quả\\n...\\n2.2.4. 
Lao động nam hoặc người chồng của lao động nữ mang thai hộ nghỉ việc khi vợ sinh con: Bản sao giấy chứng sinh hoặc bản sao giấy khai sinh hoặc trích lục khai sinh của con; trường hợp sinh con phải phẫu thuật hoặc sinh con dưới 32 tuần tuổi mà giấy chứng sinh không thể hiện thì có thêm giấy tờ của cơ sở khám bệnh, chữa bệnh thể hiện việc sinh con phải phẫu thuật, sinh con dưới 32 tuần tuổi. Trường hợp con chết sau khi sinh mà chưa được cấp giấy chứng sinh thì thay bằng trích sao hoặc tóm tắt hồ sơ bệnh án hoặc giấy ra viện của người mẹ hoặc của lao động nữ mang thai hộ thể hiện con chết…", "Việc tự giám sát chất lượng dịch vụ viễn thông của doanh nghiệp viễn thông\\n1. Các doanh nghiệp viễn thông được Bộ Thông tin và Truyền thông cấp giấy phép kinh doanh dịch vụ viễn thông phải thường xuyên tự giám sát chất lượng dịch vụ đối với tất cả các dịch vụ thuộc “Danh mục dịch vụ viễn thông bắt buộc quản lý chất lượng” mà mình cung cấp.\\n2. Trong trường hợp dịch vụ mà mình cung cấp có sự cố thì doanh nghiệp viễn thông phải thực hiện báo cáo đột xuất như quy định tại Khoản 3 Điều 8 của Thông tư này.", "Cục Quản lý, giám sát bảo hiểm; Cục Quản lý Công sản; Cục Quản lý Giá; Cục Quản lý Nợ và Tài chính đối ngoại; Cục Quản lý, giám sát Kế toán, Kiểm toán; Cục Quản lý Công sản; Cục Tài chính doanh nghiệp và Vụ Tài chính ngân hàng chủ trì phối hợp với Cục Tin học & Thống kê Tài chính xây dựng quy trình điện tử từng thủ tục hành chính theo phạm vi quản lý đối với danh mục thủ tục hành chính để thực hiện tích hợp trên Hệ thống thông tin Một cửa điện tử của Bộ Tài chính."]}, {"source_sentence": "Điều kiện để Giám đốc Học viện An ninh nhân dân được thăng cấp bậc hàm trước thời hạn như thế nào?", "sentences": ["Mức độ tự chủ và trách nhiệm\\n- Có ý thức và tác phong nghề nghiệp đúng chuẩn mực, có năng lực thực hiện công việc được giao; phương pháp làm việc khoa học, biết phân tích và giải quyết các vấn đề mới về lĩnh vực chuyên môn nghề;\\n- Gắn bó nghề nghiệp; nghiêm chỉnh chấp hành quy chế, quy định của cơ quan, doanh nghiệp, nơi đang công tác với ý thức tổ chức kỉ luật và tinh thần trách nhiệm cao trong công việc;\\n- Lập được các biện pháp an toàn và đảm bảo an toàn, vệ sinh lao động trong quá trình làm việc; có ý thức trách nhiệm công dân, thái độ và đạo đức nghề nghiệp đúng đắn, sẵn sàng nhận nhiệm vụ; tự tin, cầu tiến trong công việc; hợp tác, thân thiện, khiêm tốn trong các mối quan hệ;\\n- Tự chịu trách nhiệm về chất lượng đối với kết quả công việc, sản phẩm do mình đảm nhiệm theo các tiêu chuẩn và chịu một phần trách nhiệm đối với kết quả công việc, sản phẩm của tổ, nhóm;", "Tổ chức bộ máy\\n...\\n5. Tổng cục Hải quan có thể biệt phái công chức từ các đơn vị thuộc và trực thuộc Tổng cục để bổ sung cán bộ chủ chốt, cán bộ kỹ thuật có năng lực, kinh nghiệm cho Ban Quản lý dự án đầu tư xây dựng chuyên ngành của Tổng cục Hải quan. Thời hạn biệt phái các công chức không quá 03 năm, trường hợp quá 03 năm mà chưa hoàn thành dự án thì Tổng cục Hải quan xem xét quyết định bổ sung thời gian biệt phái.\\nNhân sự tuyển dụng mới của Ban Quản lý dự án đầu tư xây dựng chuyên ngành của Tổng cục Hải quan là viên chức hoặc hợp đồng lao động, thực hiện theo quy định về chế độ tiền lương và các chế độ, chính sách đối với viên chức và người lao động.\\n...", "Biệt phái công chức\\n...\\n6. 
Không thực hiện biệt phái công chức nữ đang mang thai hoặc nuôi con dưới 36 tháng tuổi."]}, {"source_sentence": "Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức, viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?", "sentences": ["Nhiệm vụ của giáo viên\\n1. Thực hiện nhiệm vụ tổ chức các hoạt động dạy học, giáo dục theo kế hoạch giáo dục của nhà trường và kế hoạch giáo dục của tổ chuyên môn; quản lý học sinh trong các hoạt động giáo dục do nhà trường tổ chức; tham gia các hoạt động chuyên môn; chịu trách nhiệm về chất lượng, hiệu quả giáo dục.\\n2. Trau dồi đạo đức, nêu cao tinh thần trách nhiệm, giữ gìn phẩm chất, danh dự, uy tín của nhà giáo; gương mẫu trước học sinh; thương yêu, đối xử công bằng và tôn trọng nhân cách của học sinh; bảo vệ các quyền và lợi ích chính đáng của học sinh; đoàn kết, giúp đỡ đồng nghiệp.\\n3. Học tập, rèn luyện để nâng cao sức khỏe, trình độ chính trị, chuyên môn, nghiệp vụ, đổi mới phương pháp dạy học, giáo dục.\\n4. Tham gia tập huấn, bồi dưỡng chuyên môn, nghiệp vụ.\\n5. Tham gia công tác phổ cập giáo dục trung học cơ sở ở địa phương.\\n6. Thực hiện nghĩa vụ công dân, các quy định của pháp luật và của ngành Giáo dục, các quyết định của hiệu trưởng; thực hiện nhiệm vụ do hiệu trưởng phân công, chịu sự kiểm tra, đánh giá của hiệu trưởng và các cấp quản lý giáo dục.\\n7. Phối hợp với Đội Thiếu niên Tiền phong Hồ Chí Minh, Đoàn Thanh niên Cộng sản Hồ Chí Minh, Hội Liên hiệp Thanh niên Việt Nam, gia đình học sinh và các tổ chức xã hội liên quan để tổ chức hoạt động giáo dục.\\n8. Thực hiện các nhiệm vụ khác theo quy định của pháp luật.", "“Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \\n1. Máy PCR. \\n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \\n3. Test kít xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \\n4. Máy thở chức năng cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng cao, máy thở xách tay. \\n5. Máy lọc máu liên tục. \\n6. Máy X-Quang di động. \\n7. Máy đo khí máu (đo được điện giải, lactat, hematocrite). \\n8. Máy theo dõi bệnh nhân>5 thông số. \\n9. Bơm tiêm điện; Bơm truyền dịch. \\n10. Máy phá rung tim có tạo nhịp. \\n11. Máy đo thời gian đông máu. \\n12. Máy đo huyết động.”", "Thời điểm đánh giá xếp loại chất lượng hằng năm\\n...\\n2. Căn cứ tình hình thực tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\\n3. Tại thời điểm đánh giá, xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá, xếp loại chất lượng theo quy định của pháp luật và Quy chế này."]}]}
dataset
null
556
JunxiongWang/Llama3.2-Mamba2-3B-distill
JunxiongWang
null
[ "pytorch", "llama", "arxiv:2408.15237", "license:apache-2.0", "region:us" ]
2024-10-15T20:13:14Z
2024-11-17T21:08:55+00:00
168
0
--- license: apache-2.0 --- Zero-shot results when using the [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) as the teacher model, and the [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) as the initialized model | Model | [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) | [Llama3.2-Mamba-3B-distill](https://huggingface.co/JunxiongWang/Llama3.2-Mamba-3B-distill) | [Llama3.2-Mamba-3B-dpo](https://huggingface.co/JunxiongWang/Llama3.2-Mamba-3B-dpo) | [Llama3.2-Mamba2-3B-distill](https://huggingface.co/JunxiongWang/Llama3.2-Mamba2-3B-distill) | [Llama3.2-Mamba2-3B-dpo](https://huggingface.co/JunxiongWang/Llama3.2-Mamba2-3B-dpo) | |---------------|---------------------------------------------------------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------| | Initialization Model | N/A | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct | | Teacher Model | N/A | Llama-3.1-70B-Instruct | Llama-3.1-70B-Instruct | Llama-3.1-70B-Instruct | Llama-3.1-70B-Instruct | | arc_challenge | 0.459 | 0.4838 | 0.5265 | 0.4667 | 0.541 | | arc_easy | 0.7407 | 0.7765 | 0.7997 | 0.7668 | 0.8026 | | hellaswag | 0.7043 | 0.7037 | 0.7256 | 0.6913 | 0.7445 | | mmlu | 0.6043 | 0.5448 | 0.5509 | 0.5312 | 0.5247 | | openbookqa | 0.36 | 0.394 | 0.416 | 0.388 | 0.424 | | piqa | 0.7568 | 0.7731 | 0.7731 | 0.7601 | 0.7769 | | pubmedqa | 0.696 | 0.664 | 0.7 | 0.638 | 0.654 | | race | 0.4067 | 0.4029 | 0.4364 | 0.3981 | 0.4344 | | winogrande | 0.6748 | 0.6732 | 0.674 | 0.6606 | 0.6732 | | truthfulqa | 0.3801 | 0.4202 | 0.4853 | 0.3478 | 0.5028 | ``` @article{junxiongdaniele2024mambainllama, title = {The Mamba in the Llama: Distilling and Accelerating Hybrid Models}, author = {Junxiong Wang and Daniele Paliotta and Avner May and Alexander M. Rush and Tri Dao}, journal = {arXiv preprint arXiv:2408.15237}, year = {2024} } ```
[ "PUBMEDQA" ]
Non_BioNLP
Zero-shot results when using the [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) as the teacher model, and the [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) as the initialized model | Model | [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) | [Llama3.2-Mamba-3B-distill](https://huggingface.co/JunxiongWang/Llama3.2-Mamba-3B-distill) | [Llama3.2-Mamba-3B-dpo](https://huggingface.co/JunxiongWang/Llama3.2-Mamba-3B-dpo) | [Llama3.2-Mamba2-3B-distill](https://huggingface.co/JunxiongWang/Llama3.2-Mamba2-3B-distill) | [Llama3.2-Mamba2-3B-dpo](https://huggingface.co/JunxiongWang/Llama3.2-Mamba2-3B-dpo) | |---------------|---------------------------------------------------------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------| | Initialization Model | N/A | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct | | Teacher Model | N/A | Llama-3.1-70B-Instruct | Llama-3.1-70B-Instruct | Llama-3.1-70B-Instruct | Llama-3.1-70B-Instruct | | arc_challenge | 0.459 | 0.4838 | 0.5265 | 0.4667 | 0.541 | | arc_easy | 0.7407 | 0.7765 | 0.7997 | 0.7668 | 0.8026 | | hellaswag | 0.7043 | 0.7037 | 0.7256 | 0.6913 | 0.7445 | | mmlu | 0.6043 | 0.5448 | 0.5509 | 0.5312 | 0.5247 | | openbookqa | 0.36 | 0.394 | 0.416 | 0.388 | 0.424 | | piqa | 0.7568 | 0.7731 | 0.7731 | 0.7601 | 0.7769 | | pubmedqa | 0.696 | 0.664 | 0.7 | 0.638 | 0.654 | | race | 0.4067 | 0.4029 | 0.4364 | 0.3981 | 0.4344 | | winogrande | 0.6748 | 0.6732 | 0.674 | 0.6606 | 0.6732 | | truthfulqa | 0.3801 | 0.4202 | 0.4853 | 0.3478 | 0.5028 | ``` @article{junxiongdaniele2024mambainllama, title = {The Mamba in the Llama: Distilling and Accelerating Hybrid Models}, author = {Junxiong Wang and Daniele Paliotta and Avner May and Alexander M. Rush and Tri Dao}, journal = {arXiv preprint arXiv:2408.15237}, year = {2024} } ```
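The card does not state which harness produced the zero-shot numbers above. As a hedged sketch, assuming EleutherAI's lm-evaluation-harness and that the checkpoint loads through its standard Hugging Face backend (the hybrid Mamba layers may in practice require the authors' MambaInLlama code), a comparable run could look like:

```python
import lm_eval

# Illustrative evaluation run; task names follow lm-evaluation-harness conventions
# and the model id is taken from the card above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=JunxiongWang/Llama3.2-Mamba2-3B-distill,dtype=bfloat16",
    tasks=["arc_challenge", "arc_easy", "hellaswag", "winogrande"],
    num_fewshot=0,
    batch_size=8,
)

# Print the per-task metrics (e.g., accuracy) for comparison with the table above.
for task, metrics in results["results"].items():
    print(task, metrics)
```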
{"license": "apache-2.0"}
dataset
null
557
MonteXiaofeng/CareBot_Medical_multi-llama3-8b-instruct
MonteXiaofeng
null
[ "safetensors", "llama", "医疗对话模型", "中英文多语种医疗对话模型", "chatmodel", "dataset:BAAI/IndustryInstruction_Health-Medicine", "dataset:BAAI/IndustryInstruction", "base_model:MonteXiaofeng/CareBot_Medical_multi-llama3-8b-base", "base_model:finetune:MonteXiaofeng/CareBot_Medical_multi-llama3-8b-base", "license:apache-2.0", "region:us" ]
2024-09-29T03:24:25Z
2024-10-09T06:12:32+00:00
22
1
--- base_model: - MonteXiaofeng/CareBot_Medical_multi-llama3-8b-base datasets: - BAAI/IndustryInstruction_Health-Medicine - BAAI/IndustryInstruction license: apache-2.0 tags: - 医疗对话模型 - 中英文多语种医疗对话模型 - chatmodel --- This model is fine-tuned from MonteXiaofeng/CareBot_Medical_multi-llama3-8b-base on the BAAI/IndustryInstruction_Health-Medicine dataset. To enhance the model's ability to follow medical instructions and better adapt to specific medical scenarios, we conduct supervised fine-tuning. This process involves using conversational-style data (comprising both queries and responses) to finetune the pretrained LLM. In the following sections, we explore the details of data construction and training methods. ## Data Construction Our SFT dataset comprises a diverse array of question types, including multiple-choice questions from medical exams, single-turn disease diagnoses, and multi-turn health consultations. It integrates data from seven publicly available sources: Chinese Medical Dialogue Data (https://github.com/Toyhom/Chinese-medical-dialogue-data), Huatuo26M, MedDialog, ChatMed Consult Dataset, ChatDoctor, CMB (https://github.com/FreedomIntelligence/CMB), and MedQA. We preserve portions of authentic doctor-patient conversations and augment the dataset by rewriting the remaining content. For these rewrites, we use real-world medical scenarios as prompts and generate responses via GPT-4. We believe this ensures the diversity of the SFT dataset, which helps CareBot better adapt to different types of medical problems and patient situations, thereby improving its performance in a variety of scenarios. ## Evaluation Evaluation results on benchmarks are shown below. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/kqvLfcFtkw6lHcHtCySLr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/UiokfV8qcYEyCWEa__820.png) GSB results compared with other medical LLMs: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/rOnnIoY9MaXPTFD_R10r1.png)
[ "MEDDIALOG", "MEDQA" ]
BioNLP
This model is fine-tuned from MonteXiaofeng/CareBot_Medical_multi-llama3-8b-base on the BAAI/IndustryInstruction_Health-Medicine dataset. To enhance the model's ability to follow medical instructions and better adapt to specific medical scenarios, we conduct supervised fine-tuning. This process involves using conversational-style data (comprising both queries and responses) to finetune the pretrained LLM. In the following sections, we explore the details of data construction and training methods. ## Data Construction Our SFT dataset comprises a diverse array of question types, including multiple-choice questions from medical exams, single-turn disease diagnoses, and multi-turn health consultations. It integrates data from seven publicly available sources: Chinese Medical Dialogue Data (https://github.com/Toyhom/Chinese-medical-dialogue-data), Huatuo26M, MedDialog, ChatMed Consult Dataset, ChatDoctor, CMB (https://github.com/FreedomIntelligence/CMB), and MedQA. We preserve portions of authentic doctor-patient conversations and augment the dataset by rewriting the remaining content. For these rewrites, we use real-world medical scenarios as prompts and generate responses via GPT-4. We believe this ensures the diversity of the SFT dataset, which helps CareBot better adapt to different types of medical problems and patient situations, thereby improving its performance in a variety of scenarios. ## Evaluation Evaluation results on benchmarks are shown below. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/kqvLfcFtkw6lHcHtCySLr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/UiokfV8qcYEyCWEa__820.png) GSB results compared with other medical LLMs: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/rOnnIoY9MaXPTFD_R10r1.png)
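As an illustrative sketch (not part of the original card) of running a single-turn health consultation against this checkpoint with plain transformers, assuming it keeps the Llama-3 chat template of its base model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MonteXiaofeng/CareBot_Medical_multi-llama3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Single-turn health consultation, formatted with the chat template inherited
# from Llama-3 (an assumption; the card does not document the prompt format).
messages = [
    {"role": "system", "content": "You are a careful, helpful medical assistant."},
    {"role": "user", "content": "What lifestyle changes help manage mild hypertension?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```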
{"base_model": ["MonteXiaofeng/CareBot_Medical_multi-llama3-8b-base"], "datasets": ["BAAI/IndustryInstruction_Health-Medicine", "BAAI/IndustryInstruction"], "license": "apache-2.0", "tags": ["医疗对话模型", "中英文多语种医疗对话模型", "chatmodel"]}
dataset
null
558
shivamgoel97/largefinetune
shivamgoel97
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:30000", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-01-13T18:02:31Z
2025-01-13T18:02:42+00:00
6
0
--- base_model: sentence-transformers/all-MiniLM-L6-v2 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:30000 - loss:CosineSimilarityLoss widget: - source_sentence: What error message appears if a customer attempts to enter a Custom Name and a non-VZW number? sentences: - 'High Risk SIM Swaps send an SMS and email notification to customers to let them know when a change is complete. Customers who receive these notifications, but did not make the change may contact CS to find out why they were notified. If customers believe that their account is compromised with a SIM takeover, follow these steps after authentication is complete:' - 'How information appears when a customer subscribes to Company Name ID based on how they have configured the lines on their accounts: Note: Must be a valid 10-digit NPA/NXX assigned number. Toll Free Number Restriction: If a customer sets their Custom Number Display to be a toll-free number, then calls made by MTNs using the custom number may not be accepted by other terminating toll-free numbers. The name also may not always appear. Company Name ID retrieves its data from various carrier databases, and not all databases support toll-free numbers. If the customer attempts to enter a Custom Name and a non-VZW number an error appears: "NPA/NXX is owned and managed by [Company]. To change the display name associated with this number, you need to contact [Company] and request they change the Caller ID name associated with it. What would you like to do next?" Customer is given 2 options; USE or USE a Different Number. If the customer selects USE, then the Custom Number applies and the Custom Name does not apply. The name displayed is the name associated with the Custom Number as listed in the Caller ID database. If the customer selects USE a Different Number, then the customer is given the opportunity to enter a different Custom Number. Example: XYZ AC & Heating wants to have their Company''s Customer Service number appear on Caller ID when their technicians call customers, but does not enter a Custom Name. They have configured the lines below with Company Name ID Custom Number. Enter Customer Number (VZW NPA/NXX) Default name appears as "Wireless Caller" 800-123-4567' - 'Broadband Portal Failed Activation Special Handling Instructions Lamborghini Customers do not have access to Inside Sales and must not be forwarded as with other OEMs. Broadband Portal is the only means for trial enrollment until the Internet channel is enabled (TBD 1Q22). The Lamborghini product is an embedded telematics device with Wi-Fi enabled vehicles. Customers can opt in for a free in-vehicle Wi-Fi trial. Lamborghini owners receive either 1 GB or 1 month, whichever comes first. Customers receive email alerts when they have 50% and 0% of their trial data remaining. When the trial period has ended or a customer has 0% of their data remaining, the customer receives an email stating that the trial has expired and asking to subscribe for continued service. Additional marketing-related emails to be delivered to customers on the following cadence: Day 1 (Welcome) Day 5 (Trial Engagement) Day 10 (Trial Reinforcement) Day 30 (Trial-to-Paid) Trial Enrollment Vehicle Compatibility The Lamborghini - 2022 Urus is compatible. To check the vehicle''s compatibility, advise customers to provide the vehicle''s IMEI (Device ID) number. 
To assist the customer in locating the IMEI (a 15-digit serial number located within the vehicle''s head unit HMI), see Connected Car Wi-Fi (Lamborghini) - View Device ID (IMEI). Connected Car Wi-Fi Hotspot: In-Vehicle Unit In the cars head unit (car''s electronic display), 2 categories of service options are available: Telematics (i.e., radio serviced, GPS/navigation, car service alerts) Wi-Fi connectivity Note: See the Knowledge Base (KB) articles to help guide the customer through set-up and connection from a wireless device to the Connected Car Wi-Fi Hotspot. Support Tools A Training Reference Guide and KB articles are available to assist with tier 1 support. Note: BGCO support can review the Split Data Routing KB for details regarding multi-part billing. KB Articles for Lamborghini Vehicles' - source_sentence: What additional responsibilities do Social Support Coordinators have besides handling cases? sentences: - 'Context: in the Device Information tile. 3 Click the number hyperlink in the Agreement # column. Result: The Device Payment History screen displays. 4 Within the MDN column, click the MDN hyperlink. Note: This source MDN is the number that you want to transfer from. 5 Select the desired MDN from the dropdown menu. Note: This is the MDN you want to move the agreement to. 6 Click Submit. Note: In the event an error is received, an AYS ticket can be submitted to escalate the issue. Name of the appliction: Automated Customer Support System Vzw Primary Issue: Device Payment IssueTable: <table border="1" cellpadding="5" style="font-family: nhg-text-roman , Arial , sans-serif;font-size: 14.0px;" width="100%"> <tbody><tr><td style="text-align: center;" valign="top" width="05%">1</td> <td width="95%"><p>Within ACSS, click the <b>Devices tab</b>.</p> </td> </tr><tr><td style="text-align: center;" valign="top" width="05%">2</td> <td width="95%">Click the <b>Device Payment</b> hyperlink in the <b>Device Information </b>tile. </td> </tr><tr><td style="text-align: center;" valign="top" width="05%">3</td> <td width="95%">Click the number hyperlink in the Agreement # column. Result: The <b>Device Payment History </b>screen displays.</td> </tr><tr><td style="text-align: center;" valign="top" width="05%">4</td> <td width="95%"><p>Within the MDN column, click the <b>MDN hyperlink</b>.<br/> Note: This source MDN is the number that you want to transfer from. </p> </td> </tr><tr><td style="text-align: center;" valign="top" width="05%">5</td> <td width="95%">Select the desired MDN from the dropdown menu.<br/> Note: This is the MDN you want to move the agreement to.</td> </tr><tr><td style="text-align: center;" valign="top" width="05%">6</td> <td width="95%"><p>Click <b>Submit</b>.<br/> <br/> Note: In the event an error is received, an <a href="https://atyourservice.verizon.com/ays?id=support">AYS ticket</a> can be submitted to escalate the issue.</p> <ul> <li>Name of the appliction: Automated Customer Support System Vzw</li> <li>Primary Issue: Device Payment Issue</li> </ul> </td> </tr></tbody></table>' - 'Context: verification process. This includes IDs that do or do not have an expiration date indicated on the ID. Refer to National Identification Requirement Matrix for a list of acceptable forms of ID. Driver''s license verification process: Note: Only PACT Leadership can make the exception to allow a driver''s license to be accepted without successful contact with the customer on an outbound call. An outbound call is required. 
If unable to reach the customer, leave a voicemail message informing of the port request. This must only be completed in an escalated situation or as an exception based on the customer''s circumstance.Table: <table border="1" cellpadding="5" cellspacing="0" width="100%"><tbody><tr><th style="text-align: center;" valign="top" width="5%">Step</th><th style="text-align: left;" valign="top">Action</th></tr><tr><td style="text-align: center;" valign="top" width="5%">1</td><td style="text-align: left;" valign="top"><p>Provide the Outbound call disclosure which includes:</p><ul><li>The representative''s name</li><li>Company and department</li><li>Reason for the call</li><li>Advise that the call is recorded for quality purposes<br/> </li></ul><p>Example: "This is (name) with Verizon''s port center calling about your transfer of the number ending in XXXX to Verizon. Is now a good time to review this request? ...great, well just to let you know, this call may be monitored or recorded for quality and training purposes."</p></td></tr><tr><td style="text-align: center;" valign="top" width="5%">2</td><td style="text-align: left;" valign="top"><p>Complete verification and ask for:</p><ul><li>The first and last name on the port request.<ul><li>Exception: When working a prepaid port and a valid name is not listed on the port request (e.g., Prepaid, No Name, etc.).</li></ul></li><li>The Ported Telephone Number (PTN).</li></ul></td></tr><tr><td style="text-align: center;" valign="top" width="5%">3</td><td style="text-align: left;" valign="top">Ask the customer if the PTN is the only number porting.</td></tr><tr><td style="text-align: center;" valign="top" width="5%">4</td><td><p>Remark the account.</p><p>Example: "Rec ticket, Customer information does not match called customer, TT Mary Smith/verified name on PR and PTN, updated account number, sent sup3, rec''d confirm. (Name)."</p></td></tr></tbody></table>' - Social Support Coordinators/Speclialists first assist peers with de-escalation support so the customer can continue to work with the original agent. When the customer requests to speak to a member of leadership, Social Support Coordinators/Specialists take over handling the case. For more information, refer to the Escalations section. - source_sentence: What is the procedure for handling garments with quality issues? sentences: - 'Html 2 Canvas Changing Ownership of an Account (SMART) Vote Up Vote Down Thank you. Your rating has been submitted. I did not find this page helpful because the content on the page: (check all the apply) X Does not have the information that I needed Had too much information Was confusing Was not well organized Was out-of-date Comments : 300 characters or less I found this page helpful because: X Comments : 300 characters or less When changing account ownership from the current Account Owner (AO) to an Account Manager (AM) on the account, the following rules apply: You must be on the current AO’s line to begin the process of transferring ownership. You can select any AMs’ line to transfer ownership to. That line will become the new AO line. The assuming AO’s ASC is required. You cannot transfer ownership to more than 1 line. Follow the steps below to change account ownership. Back to Top Confidential and proprietary material for Verizon Wireless personnel and Authorized Agents/Retailers only. Use, disclosure, or distribution of this material is not permitted to any unauthorized persons or third parties except by written agreement. 
© 2023 Verizon Wireless OST PageID 209429' - 'Context: audience is mature and pragmatic customers who use less talk, text and data. Refer to Tracfone for more information.For information on 30, 60 and 90 day plans, refer to Tracfone Service Plans. 30 Days of Service Plans For the details on the 30 days of service plans, refer to: 365 Days of Service Plans For the details on the 365 days of service plans, refer to:The following Care Contact options are available: Phone: Call 1-800-867-7183 for assistance. Online Chat: Go to Tracfone Contact Us and select Chat With Us. Text Helpline: Text 611611 to check balance, get refills and more.Table: <table border="1" cellpadding="5" cellspacing="0" width="100%"><tbody><tr><th style="text-align: left;" valign="top" width="25%">Cost</th><th style="text-align: left;" valign="top" width="25%">Talk/Text</th><th style="text-align: left;" valign="top" width="25%">Data</th><th style="text-align: left;" valign="top" width="25%">Hotspot</th></tr><tr><td style="text-align: left;" valign="top" width="25%">$199</td><td style="text-align: left;" valign="top" width="25%"><ul><li>Talk: Unlimited</li><li>Text: Unlimited</li></ul></td><td style="text-align: left;" valign="top" width="25%">24 GB</td><td style="text-align: left;" valign="top" width="25%">N/A</td></tr></tbody></table>' - '$35 Custom Unlimited Business Plan for Smartphones A $10 off promotional BIC is added and must be shown at checkout. Net price per month is $25. Price plan code is: 50400. Lines that are no longer under a line term: Purchase equipment through a device installment plan. Activate with Customer Provided Equipment (CPE) or purchase equipment at full retail price. Activate on, or change to, these plans receive a $10 credit that can be applied to the monthly access fee. Can be applied to new and existing lines. Updating existing lines requires a manual price plan change to reflect the discounted $25 price plan following the same $10 BIC process. If the promotion does not populate on the checkout screen, send orders to the Center of Excellence (COE) to process. Plan details include: Stackable with Quarterly Promotions (i.e., activation and port in BICs) Unlimited domestic data allowance with mobile hotspot Unlimited monthly anytime minutes-domestic, Canada and Mexico Domestic Canada and Mexico Long Distance Toll Free Unlimited Domestic messaging allowance Note: After 10 GB of data usage on a line during any billing cycle, usage may be prioritized behind other customers in the event of network congestion. This plan no longer starts off throttled or deprioritized. To ensure users are able to maximize their high speed data use for business applications, video applications stream at up to 480p. Mobile Hotspot is available on all capable devices and allows the line to share data allowance with multiple Wi-Fi enabled devices. If 10 GB of Mobile Hotspot data usage is exceeded on any line in any given billing cycle, Verizon limits the data throughput speeds to up to 200 Kbps for additional usage for the remainder of the then current billing cycle for the line that exceeds the data usage. For data usage in Canada and Mexico, after the first 512 MB of usage in a day, throughput speeds is reduced for the remainder of the day. This plan has no line caps (e.g., the plan can go beyond 99 lines). 
$45 Custom Business Unlimited Smartphone Plan Features include: $10 out of contract discount Net price per month of $35 (after out of contract discount is applied) 22 GB Premium Smartphone Data Unlimited Talk/Text Price plan code: 26524 Plan Details include: Stackable with Quarterly Promotions (i.e., activation and port in BICs). Unlimited domestic data allowance with mobile hotspot up to 10 GB before throttling speeds apply Unlimited Canada and Mexico messaging and data allowance Unlimited monthly anytime minutes-domestic, Canada and Mexico Domestic Canada and Mexico Long Distance Toll Free Unlimited Domestic and International messaging allowance Note: Usage on a line during any billing cycle may be prioritized behind other customers in the event of network congestion after 22 GB of data usage per line in the same billing cycle. For data usage in Canada and Mexico, after the first 512 MB of usage in a day, throughput speeds are reduced for the remainder of the day. Mobile Hotspot is available on all capable devices and allows the line to share data allowance with multiple Wi-Fi enabled devices. Hotspot usage exceeding 10 GB in the same monthly billing cycle is subject to throttled speeds. This plan has no line caps (e.g., the plan can go beyond 99 lines). $25 Custom Business Unlimited Mobile Broadband Data Plan for Tablets Plan details include: If 22 GB of domestic data usage is exceeded on any line in any given billing cycle, Verizon Wireless limits the data throughput speeds to up to 600 Kbps for additional usage for the remainder of the then current billing cycle for the line that exceeds the data usage. Price Plan Code: 61893.' - source_sentence: What should be done if the customer does not answer the initial outbound call during the verification process? sentences: - 'Context: payment plan) The survey cannot be reassigned if the BGCO Representative refuses to take a transfer calls based on the SPG valid transfer reason listed. When cold transfer takes place, the survey assignment follows the last representative that handled the call. Use the Advanced Tech Support CTI option to reach Global Tech Support.Partner Representative Note: Do not refer escalated customers to the Contact Us option on vzw.com or correspondence address on the bill. Partner Supervisor/Coach Note: If a call escalates beyond a Supervisor or Coach, another Supervisor/Coach is not the next level of support. 
Partner Operations Manager Verizon Site ManagerTable: <table border="1" cellpadding="5" cellspacing="0" width="100%"><tbody><tr><th style="text-align: center;" valign="top" width="5%">Step</th><th style="text-align: left;" valign="top">Action</th></tr><tr><td style="text-align: center;" valign="top" width="5%">1</td><td style="text-align: left;" valign="top"><p>Attempt to resolve the customers problem using tools and resources (when required).</p><p>Note: Use <a href="/content/km/categories/account-support/203254.html" target="_blank">Suggested Scripting to Prevent Escalation</a> to assist customers.</p></td></tr><tr><td style="text-align: center;" valign="top" width="5%">2</td><td style="text-align: left;" valign="top">Remark the account with the escalation reason and any offers provided to the customer.</td></tr><tr><td style="text-align: center;" valign="top" width="5%">3</td><td style="text-align: left;" valign="top">Engage a Supervisor when the customer requests to speak to a Supervisor (after the de-escalation attempt).</td></tr></tbody></table>' - 'The ShatterShield™ display system is made of 3 separate components: Display (Warranted against shattering and cracking for 4 years from the original date of consumer purchase.) Embedded lens (Warranted against shattering and cracking for 4 years from the original date of consumer purchase.) Protective lens (Consumer-replaceable lens is not covered by the limited warranty. However, it should always be in place to prevent scratches and other damage to the underlying components.) Motorola Moto ShatterShield program details:' - 'Context: verification process. This includes IDs that do or do not have an expiration date indicated on the ID. Refer to National Identification Requirement Matrix for a list of acceptable forms of ID. Driver''s license verification process: Note: Only PACT Leadership can make the exception to allow a driver''s license to be accepted without successful contact with the customer on an outbound call. An outbound call is required. If unable to reach the customer, leave a voicemail message informing of the port request. 
This must only be completed in an escalated situation or as an exception based on the customer''s circumstance.Table: <table border="1" cellpadding="5" cellspacing="0" width="100%"><tbody><tr><th style="text-align: center;" valign="top" width="5%">Step</th><th style="text-align: left;" valign="top">Action</th></tr><tr><td style="text-align: center;" valign="top" width="5%">1</td><td style="text-align: left;" valign="top">Verify the MDN being ported.</td></tr><tr><td style="text-align: center;" valign="top" width="5%">2</td><td style="text-align: left;" valign="top"><p>Ensure the following information on the MDN being ported out:</p><ul><li>There has not been a SIM swap in the past 24 hours.</li><li>Call Forwarding is not turned on.<ul><li>To check call forwarding in ACSS, select <b>Devices </b>(located on the top navigation bar) &gt; <b>Subscriber</b>.<ul><li>The numbers display if call forwarding is turned on.</li></ul></li></ul></li></ul></td></tr><tr><td style="text-align: center;" valign="top" width="5%">3</td><td style="text-align: left;" valign="top">Call the MDN being ported out and complete 1 of the following according to the outcome:<ul><li>If the customer answers, continue to Step 4.</li><li>If the customer does not answer, attempt a second call to the Can Be Reached (CBR) number.<ul><li>If there is no CBR, call an alternate MDN on the account.</li><li>If the customer does not answer, proceed to Step 6.</li></ul></li><li>If all lines on the account are suspended for non-pay or non-voice capable devices (e.g., Jetpack, smartwatch and tablet), only call the CBR on the account.</li></ul><p>Note:  Do not advise the customer that an outbound call is being made for verification. Inform the customer of a brief hold while reviewing the transaction.</p></td></tr><tr><td style="text-align: center;" valign="top" width="5%">4</td><td style="text-align: left;" valign="top">Verify the first and last name of the Account Owner or Account Manager.  </td></tr><tr><td style="text-align: center;" valign="top" width="5%">5</td><td style="text-align: left;" valign="top"><p>Confirm with the person answering the outbound call that the customer is porting out from Verizon.</p><p>Note: If the customer is not porting out:</p><ul><li>Assist the customer with updating the API to <b>F</b>.</li><li>Advise the caller that Verizon is unable to complete secondary verification and refer the customer to the Fraud Prevention Team at 888-483-7200.</li><li>Do not continue with secondary verification.</li></ul></td></tr><tr><td style="text-align: center;" valign="top" width="5%">6</td><td style="text-align: left;" valign="top"><p>Check remarks to ensure a My Verizon password and/or Account PIN reset was not completed within the past 24 hours. </p><p>Note: </p><ul><li>If a password / PIN change was completed in the past 24 hours, direct the customer appropriately. </li><li>If the customer can only receive email for step-up verification, direct the customer to My Verizon after 24 hours expires. 
<ul><li>Do not use email in this situation.</li></ul></li><li>If the customer cannot wait 24 hours, the customer can visit a Verizon store where the Number Transfer PIN (NTP) can be generated.<ul><li>The port out process is not stopped while waiting for the NTP.</li><li>Once the customer receives the NTP, the customer can provide the NTP to the new carrier to complete the process.</li></ul></li></ul></td></tr><tr><td style="text-align: center;" valign="top" width="5%">7</td><td style="text-align: left;" valign="top"><p>Verify the ACSS PIN (if an Account PIN exists or was established). </p><p>Note: If there is not an established Account PIN, contact CS to assist the customer to establish the Account PIN. </p></td></tr><tr><td style="text-align: center;" valign="top" width="5%">8</td><td style="text-align: left;" valign="top"><p>Verify the CBR number.</p><p>Note: </p><ul><li>This step is only applicable when a valid CBR is listed.</li><li>This step is not required when:<ul><li>The CBR is the Ported Telephone Number (PTN) and the customer has provided the full PTN. </li><li>There is no valid CBR listed on the account (e.g., 999-999-9999).</li></ul></li></ul></td></tr><tr><td style="text-align: center;" valign="top" width="5%">9</td><td style="text-align: left;" valign="top"><p>Ask for the exact amount of the last bill. </p><p>Note: Local Number Portability (LNP) accepts the last payment amount. </p></td></tr><tr><td style="text-align: center;" valign="top" width="5%">10</td><td style="text-align: left;" valign="top"><p>Verify the following:</p><ul><li>The device IMEI or Integrated Circuit Card ID (ICCID) (e.g., SIM card) of the line being ported out from the account.</li><li>The make and model of the device.</li></ul></td></tr><tr><td style="text-align: center;" valign="top" width="5%">11</td><td style="text-align: left;" valign="top">Provide the customer with the account number once verification is complete.</td></tr></tbody></table>' - source_sentence: What information is required when calling CMD to report an issue? sentences: - 'Assist customers who are experiencing trouble adding the Verizon Connections discount to their account. This enhancement is for Verizon Connections and does not include the Veterans Advantage Affinity Program. Add the token to New Installs, Moves, Change, or Pending orders. Adding VZ Connection Token in Optix Select VZ Connections in the dropdown in the Coupons, Tokens and Discounts section of One Page Ordering (OPO). Enter the 12-digit alphanumeric token provided by the customer. The token is specific to the customer and can only be provided by them. It is not in Optix. Click Verify Customer Coupon to validate the token. Click Apply to add the discount to the customer''s account. The Verizon Connections discount is presented on the Agreement + Offer section tile: Click Review Order. The discount appears on the Review Order page:' - 'LTE Internet (Installed) customers automatically get 50% more data through the first 2 full billing cycles. The extra allowance is intended to temporarily help provide overage protection. It gives customers an opportunity to track how much data they use on a monthly basis without the penalty of incurring any overage charges (within the 50% allowance). At the end of a customer''s first 2 months of service, they receive an email recommending a plan change if they went over their allowance. 
An example of the 10 GB plan: Customer automatically receives the 50% more data allowance and gets a 50% data bonus on their prorated billing cycle at activation. Customer activates a LTE Internet (Installed) 10 GB per month plan on day 15 of the month. Bill cycle is on day 20. Customer gets full 10 GB plus 5 GB for the first 5 days (prorated). The monthly allowance may be prorated if the customer: Upgrades LTE Internet (Installed) equipment (Antenna). Changes plans. Changes any features. The extra data allowance is never prorated. Customer gets 10 GB plus 5 GB for the first prorated bill cycle, and the next 2 successive bill cycles. The 50% more data allowance appears on the customer’s bills, it does not display in ACSS. Benefits: Customers are able to track their data usage through My Verizon. ACSS and data usage alerts do not reflect the 50% more data allowance. Data usage alerts do not reflect the 50% bonus data and trigger based on the customer''s plan allowance. The email address registered on the account receives alerts at 50%, 75%, 90% and 100% of usage of the plan (BAU) and at 90% of each GB used over the allowance. Customers looking to avoid unexpected or high overage charges can add SmartFamily controls to an LTE Internet (Installed) Broadband account. This allows them to set hard stop limits on monthly data usage. Note: Customers with a LTE Internet (Installed) line of service on The Verizon Plan are not eligible for the 50% More Data Allowance.' - 'Context: is due. To dismiss the alert when it is Due Now, click the MR ticket number and access the ticket.Once a scheduled follow up has arrived, notifications display as a reminder to handle the follow up.Alert: Do not place confidential, CPI or internal comments in the message box as these notes are viewable by the customer. Statusing Follow Ups Follow Up What If ScenariosManages Resolution (MR) Follow Up (automated) enhancements give customers more information and control during the process. After a ticket is submitted, review below to see what the customer sees and how the customer can manage their ticket.Table: <table border="1" cellpadding="5" cellspacing="0" width="100%"><tbody><tr><td style="text-align: left;" valign="top" width="35%"><b>If ...</b></td><td style="text-align: left;" valign="top" width="65%"><b>Then ...</b></td></tr><tr><td style="text-align: left;" valign="top" width="35%">agent misses the scheduled window for contact</td><td style="text-align: left;" valign="top" width="65%">agent attempts to contact. If customer does not answer, request to reschedule and do so when customer responds. </td></tr><tr><td style="text-align: left;" valign="top" width="35%">the agent is out sick or on vacation</td><td style="text-align: left;" valign="top" width="65%">supervisors reassign the follow up to ensure the customer receives the call back.</td></tr><tr><td style="text-align: left;" valign="top" width="35%">the customer modifies the scheduled call back</td><td style="text-align: left;" valign="top" width="65%">customer selects a new time based on the agent''s availability through My Verizon app. The follow up system automatically updates the time on the agent''s end and provides reminders the day of the newly selected time. </td></tr><tr><td style="text-align: left;" valign="top" width="35%">the customer cannot be reached</td><td style="text-align: left;" valign="top" width="65%"><p>the agent completes the following:</p><ol><li>Leave a detailed voicemail.</li><li>Extend the follow up for the next business day. 
</li><li>Select a reason for the reschedule:<ul><li>No answer</li><li>Voicemail</li><li>Line busy</li><li>Note: Selecting 1 of these options places the ticket in Pending-Need more information status. </li></ul></li><li>Update the Follow Up Status accordingly.</li></ol></td></tr><tr><td style="text-align: left;" valign="top" width="35%">there is a need to make changes to a follow up without agreement from the customer</td><td style="text-align: left;" valign="top" width="65%"><p>do not do without customer approval. </p><p>Note: The Managed Resolution (MR) Follow Up transformation offers customers the experience to receive updates on their follow ups and receive updates any time there is a change. Customer may call back if it is not a change the customer requested (e.g., changing MDN, follow up method, etc.)</p></td></tr><tr><td style="text-align: left;" valign="top">there is a need to make a change to a follow up and waiting for the customer to respond</td><td style="text-align: left;" valign="top"><p>select <b>Need more information</b>. </p><p>Result: The customer receives a 24 hour and 72 hour reminder. If the customer does not respond within 7 days, the follow up auto-resolves.</p></td></tr><tr><td style="text-align: left;" valign="top">waiting for NRB ticket or another resolution prior to contacting the customer</td><td style="text-align: left;" valign="top"><p>select <b>Need more time, still reviewing and processing</b>.</p><p>Agent types an update message to the customer that is sent through SMS or email to alert the customer of the follow up status.</p></td></tr></tbody></table>' --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("shivamgoel97/largefinetune") # Run inference sentences = [ 'What information is required when calling CMD to report an issue?', 'Context: is due. To dismiss the alert when it is Due Now, click the MR ticket number and access the ticket.Once a scheduled follow up has arrived, notifications display as a reminder to handle the follow up.Alert: Do not place confidential, CPI or internal comments in the message box as these notes are viewable by the customer. Statusing Follow Ups Follow Up What If ScenariosManages Resolution (MR) Follow Up (automated) enhancements give customers more information and control during the process. After a ticket is submitted, review below to see what the customer sees and how the customer can manage their ticket.Table: <table border="1" cellpadding="5" cellspacing="0" width="100%"><tbody><tr><td style="text-align: left;" valign="top" width="35%"><b>If ...</b></td><td style="text-align: left;" valign="top" width="65%"><b>Then ...</b></td></tr><tr><td style="text-align: left;" valign="top" width="35%">agent misses the scheduled window for contact</td><td style="text-align: left;" valign="top" width="65%">agent attempts to contact. If customer does not answer, request to reschedule and do so when customer responds.\xa0</td></tr><tr><td style="text-align: left;" valign="top" width="35%">the agent is out sick or on vacation</td><td style="text-align: left;" valign="top" width="65%">supervisors reassign the follow up to ensure the customer receives the call back.</td></tr><tr><td style="text-align: left;" valign="top" width="35%">the customer modifies the scheduled call back</td><td style="text-align: left;" valign="top" width="65%">customer selects a new time based on the agent\'s availability through My Verizon app. The follow up system automatically updates the time on the agent\'s end and provides reminders the day of the newly selected time.\xa0</td></tr><tr><td style="text-align: left;" valign="top" width="35%">the customer cannot be reached</td><td style="text-align: left;" valign="top" width="65%"><p>the agent completes the following:</p><ol><li>Leave a detailed voicemail.</li><li>Extend the follow up for the next business day.\xa0</li><li>Select a reason for the reschedule:<ul><li>No answer</li><li>Voicemail</li><li>Line busy</li><li>Note: Selecting 1 of these options places the ticket in Pending-Need more information status.\xa0</li></ul></li><li>Update the Follow Up Status accordingly.</li></ol></td></tr><tr><td style="text-align: left;" valign="top" width="35%">there is a need to make changes to a follow up without agreement from the customer</td><td style="text-align: left;" valign="top" width="65%"><p>do not do without customer approval.\xa0</p><p>Note: The Managed Resolution (MR) Follow Up transformation offers customers the experience to receive updates on their follow ups and receive updates any time there is a change. Customer may call back if it is not a change the customer requested (e.g., changing MDN, follow up method, etc.)</p></td></tr><tr><td style="text-align: left;" valign="top">there is a need to make a change to a follow up and waiting for the customer to respond</td><td style="text-align: left;" valign="top"><p>select <b>Need more information</b>.\xa0</p><p>Result: The customer receives a 24 hour and 72 hour reminder. 
If the customer does not respond within 7 days, the follow up auto-resolves.</p></td></tr><tr><td style="text-align: left;" valign="top">waiting for NRB ticket or another resolution prior to contacting the customer</td><td style="text-align: left;" valign="top"><p>select <b>Need more time, still reviewing and processing</b>.</p><p>Agent types an update message to the customer that is sent through SMS or email to alert the customer of the follow up status.</p></td></tr></tbody></table>', "Assist customers who are experiencing trouble adding the Verizon Connections discount to their account. This enhancement is for Verizon Connections and does not include the Veterans Advantage Affinity Program. Add the token to New Installs, Moves, Change, or Pending orders. Adding VZ Connection Token in Optix Select VZ Connections in the dropdown in the Coupons, Tokens and Discounts section of One Page Ordering (OPO). Enter the 12-digit alphanumeric token provided by the customer. The token is specific to the customer and can only be provided by them. It is not in Optix. Click Verify Customer Coupon to validate the token. Click Apply to add the discount to the customer's account. The Verizon Connections discount is presented on the Agreement + Offer section tile: Click Review Order. The discount appears on the Review Order page:", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 30,000 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 9 tokens</li><li>mean: 18.77 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 55 tokens</li><li>mean: 218.68 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>0: ~32.80%</li><li>1: ~67.20%</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:-------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Who is exempt from the rules set by the Truth in Caller ID Act?</code> | <code>The Truth in Caller ID Act The FCC adopted rules implementing the Truth in Caller ID Act. The FCC rules: Prohibit any person or entity from transmitting misleading or inaccurate caller ID information with the intent to defraud, cause harm or wrongfully get anything of value. Subject violators to a penalty of up to $10,000 for each violation of the rules. Exempt authorized activities by law enforcement agencies and situations where courts have authorized caller ID manipulation to occur. Need additional assistance with walking the customer through their experience? Access the IG Fraud Response tool for helpful scenarios and responses. Verizon wireless also offers Caller Name ID; which can be downloaded on user's device through the APP Store (preloaded on Android devices). Caller ID has Spam Management functionality. See Call Filter for additional details (monthly fee applies). Refer the customer to the FCC Spoofing and Caller ID website for additional information. There is a short video ...</code> | <code>1</code> | | <code>What additional tools does the paid version of the call filtering feature provide?</code> | <code>Sell it in 5: Call Filter (free) helps companies and agencies increase productivity and take control of their calls by identifying and routing any spam, fraudulent, or illegal robocalls to voicemail. 
Call Filter Plus (paid version) provides additional tools for user control such as manually blocking and unblocking numbers; blocking calls based on risk levels; and adding a name to unknown numbers in order to minimize impact to a business or agency. Sales Point of Contacts (SPOC's) may order or block the free and paid versions of the feature for their employees by customizing the feature block on their applicable lines. Business employees may order Call Filter (free) and engage their company SPOC to order Call Filter Plus (paid version through contract amendment). Public Sector employees engage their SPOC to order Call Filter or Call Filter Plus through a contract amendment.</code> | <code>1</code> | | <code>What features are included with the Unlimited storage option that are not available with the 600 GB option?</code> | <code>Context: Connection 2.0 will not receive this trial period since there is no cost. Customer communications and setup flows will not mention the "30 days on us" when 2TB is included in Gigabit Connection 2.0. Current In-Market 500 GB: $5/month (available for prepaid customers only) 600 GB: $5.99/month Effective 9/15/20, the 500 GB Verizon Cloud storage option was increased to 600 GB for $5.99. Customers who had the 500 GB storage option accounts were automatically increased to 600 GB of storage for $5.99/month. A notification about this change was sent to the account owner through their Verizon account. 1 TB: $9.99/monthTable: <table border="1" cellpadding="5" cellspacing="0" width="100%"><tbody><tr><td><b><br/> Offers as of 12/2/20<br/></b>    </td><td><b>Unlimited<br/> $19.99/month</b></td><td><b>2 TB<br/> $14.99/month</b></td><td><b>600 GB<br/> $5.99/month</b></td></tr><tr><td>Users and devices</td><td>Up to 5 users<br/>Unlimited devices</td><td>Up to 5 users<br/>Unlimited devices<br...</code> | <code>1</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - 
`dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:-----:|:-------------:| | 0.1333 | 500 | 0.179 | | 0.2667 | 1000 | 0.1591 | | 0.4 | 1500 | 0.1492 | | 0.5333 | 2000 | 0.1454 | | 0.6667 | 2500 | 0.1422 | | 0.8 | 3000 | 0.1404 | | 0.9333 | 3500 | 0.1354 | | 1.0667 | 4000 | 0.1288 | | 1.2 | 4500 | 0.1256 | | 1.3333 | 5000 | 0.1234 | | 1.4667 | 5500 | 0.1208 | | 1.6 | 6000 | 0.1215 | | 1.7333 | 6500 | 0.123 | | 1.8667 | 7000 | 0.1231 | | 2.0 | 7500 | 0.1181 | | 2.1333 | 8000 | 0.1061 | | 2.2667 | 8500 | 0.1066 | | 2.4 | 9000 | 0.1093 | | 2.5333 | 9500 | 0.105 | | 2.6667 | 10000 | 0.1088 | | 2.8 | 10500 | 0.1053 | | 2.9333 | 11000 | 0.1062 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu121 - Accelerate: 1.2.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model 
Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
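The Training Details above describe a 30,000-pair dataset with columns `sentence_0`, `sentence_1` and an integer `label`, trained for 3 epochs at batch size 8 and learning rate 5e-05 with `CosineSimilarityLoss` (MSE objective). The exact data files and preprocessing are not published here, so the rows and label handling below are assumptions; as a minimal sketch, a comparable fine-tuning run with the classic Sentence Transformers `fit` API could look like this:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed input: (sentence_0, sentence_1, label) rows with label in {0, 1}.
# The two rows below are illustrative stand-ins, not the actual training data.
rows = [
    ("Who is exempt from the rules set by the Truth in Caller ID Act?",
     "The FCC rules exempt authorized activities by law enforcement agencies ...", 1),
    ("What is the procedure for handling garments with quality issues?",
     "Call Filter Plus (paid version) provides additional tools for user control ...", 0),
]

# CosineSimilarityLoss regresses cosine(u, v) onto a float label, so cast 0/1 to float.
train_examples = [InputExample(texts=[s0, s1], label=float(lbl)) for s0, s1, lbl in rows]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
train_loss = losses.CosineSimilarityLoss(model)  # uses an MSE loss internally

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    optimizer_params={"lr": 5e-5},
    show_progress_bar=True,
)
model.save("largefinetune")
```

In Sentence Transformers 3.x the same setup can also be expressed with `SentenceTransformerTrainer`; the `fit` call above is the older, still-supported interface.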
[ "CPI" ]
Non_BioNLP
{"base_model": "sentence-transformers/all-MiniLM-L6-v2", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:30000", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "What error message appears if a customer attempts to enter a Custom Name and a non-VZW number?", "sentences": ["High Risk SIM Swaps send an SMS and email notification to customers to let them know when a change is complete. Customers who receive these notifications, but did not make the change may contact CS to find out why they were notified. If customers believe that their account is compromised with a SIM takeover, follow these steps after authentication is complete:", "How information appears when a customer subscribes to Company Name ID based on how they have configured the lines on their accounts: Note: Must be a valid 10-digit NPA/NXX assigned number. Toll Free Number Restriction: If a customer sets their Custom Number Display to be a toll-free number, then calls made by MTNs using the custom number may not be accepted by other terminating toll-free numbers. The name also may not always appear. Company Name ID retrieves its data from various carrier databases, and not all databases support toll-free numbers. If the customer attempts to enter a Custom Name and a non-VZW number an error appears: \"NPA/NXX is owned and managed by [Company]. To change the display name associated with this number, you need to contact [Company] and request they change the Caller ID name associated with it. What would you like to do next?\" Customer is given 2 options; USE or USE a Different Number. If the customer selects USE, then the Custom Number applies and the Custom Name does not apply. The name displayed is the name associated with the Custom Number as listed in the Caller ID database. If the customer selects USE a Different Number, then the customer is given the opportunity to enter a different Custom Number. Example: XYZ AC & Heating wants to have their Company's Customer Service number appear on Caller ID when their technicians call customers, but does not enter a Custom Name. They have configured the lines below with Company Name ID Custom Number. Enter Customer Number (VZW NPA/NXX) Default name appears as \"Wireless Caller\" 800-123-4567", "Broadband Portal Failed Activation Special Handling Instructions Lamborghini Customers do not have access to Inside Sales and must not be forwarded as with other OEMs. Broadband Portal is the only means for trial enrollment until the Internet channel is enabled (TBD 1Q22). The Lamborghini product is an embedded telematics device with Wi-Fi enabled vehicles. Customers can opt in for a free in-vehicle Wi-Fi trial. Lamborghini owners receive either 1 GB or 1 month, whichever comes first. Customers receive email alerts when they have 50% and 0% of their trial data remaining. When the trial period has ended or a customer has 0% of their data remaining, the customer receives an email stating that the trial has expired and asking to subscribe for continued service. Additional marketing-related emails to be delivered to customers on the following cadence: Day 1 (Welcome) Day 5 (Trial Engagement) Day 10 (Trial Reinforcement) Day 30 (Trial-to-Paid) Trial Enrollment Vehicle Compatibility The Lamborghini - 2022 Urus is compatible. To check the vehicle's compatibility, advise customers to provide the vehicle's IMEI (Device ID) number. 
To assist the customer in locating the IMEI (a 15-digit serial number located within the vehicle's head unit HMI), see Connected Car Wi-Fi (Lamborghini) - View Device ID (IMEI). Connected Car Wi-Fi Hotspot: In-Vehicle Unit In the cars head unit (car's electronic display), 2 categories of service options are available: Telematics (i.e., radio serviced, GPS/navigation, car service alerts) Wi-Fi connectivity Note: See the Knowledge Base (KB) articles to help guide the customer through set-up and connection from a wireless device to the Connected Car Wi-Fi Hotspot. Support Tools A Training Reference Guide and KB articles are available to assist with tier 1 support. Note: BGCO support can review the Split Data Routing KB for details regarding multi-part billing. KB Articles for Lamborghini Vehicles"]}, {"source_sentence": "What additional responsibilities do Social Support Coordinators have besides handling cases?", "sentences": ["Context: in the Device Information tile. 3 Click the number hyperlink in the Agreement # column. Result: The Device Payment History screen displays. 4 Within the MDN column, click the MDN hyperlink. Note: This source MDN is the number that you want to transfer from. 5 Select the desired MDN from the dropdown menu. Note: This is the MDN you want to move the agreement to. 6 Click Submit. Note: In the event an error is received, an AYS ticket can be submitted to escalate the issue. Name of the appliction: Automated Customer Support System Vzw Primary Issue: Device Payment IssueTable: <table border=\"1\" cellpadding=\"5\" style=\"font-family: nhg-text-roman , Arial , sans-serif;font-size: 14.0px;\" width=\"100%\"> <tbody><tr><td style=\"text-align: center;\" valign=\"top\" width=\"05%\">1</td> <td width=\"95%\"><p>Within ACSS, click the <b>Devices tab</b>.</p> </td> </tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"05%\">2</td> <td width=\"95%\">Click the <b>Device Payment</b> hyperlink in the <b>Device Information </b>tile. </td> </tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"05%\">3</td> <td width=\"95%\">Click the number hyperlink in the Agreement # column. Result: The <b>Device Payment History </b>screen displays.</td> </tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"05%\">4</td> <td width=\"95%\"><p>Within the MDN column, click the <b>MDN hyperlink</b>.<br/> Note: This source MDN is the number that you want to transfer from. </p> </td> </tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"05%\">5</td> <td width=\"95%\">Select the desired MDN from the dropdown menu.<br/> Note: This is the MDN you want to move the agreement to.</td> </tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"05%\">6</td> <td width=\"95%\"><p>Click <b>Submit</b>.<br/> <br/> Note: In the event an error is received, an <a href=\"https://atyourservice.verizon.com/ays?id=support\">AYS ticket</a> can be submitted to escalate the issue.</p> <ul> <li>Name of the appliction: Automated Customer Support System Vzw</li> <li>Primary Issue: Device Payment Issue</li> </ul> </td> </tr></tbody></table>", "Context: verification process. This includes IDs that do or do not have an expiration date indicated on the ID. Refer to National Identification Requirement Matrix for a list of acceptable forms of ID. Driver's license verification process: Note: Only PACT Leadership can make the exception to allow a driver's license to be accepted without successful contact with the customer on an outbound call. An outbound call is required. 
If unable to reach the customer, leave a voicemail message informing of the port request. This must only be completed in an escalated situation or as an exception based on the customer's circumstance.Table: <table border=\"1\" cellpadding=\"5\" cellspacing=\"0\" width=\"100%\"><tbody><tr><th style=\"text-align: center;\" valign=\"top\" width=\"5%\">Step</th><th style=\"text-align: left;\" valign=\"top\">Action</th></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">1</td><td style=\"text-align: left;\" valign=\"top\"><p>Provide the Outbound call disclosure which includes:</p><ul><li>The representative's name</li><li>Company and department</li><li>Reason for the call</li><li>Advise that the call is recorded for quality purposes<br/> </li></ul><p>Example: \"This is (name) with Verizon's port center calling about your transfer of the number ending in XXXX to Verizon. Is now a good time to review this request? ...great, well just to let you know, this call may be monitored or recorded for quality and training purposes.\"</p></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">2</td><td style=\"text-align: left;\" valign=\"top\"><p>Complete verification and ask for:</p><ul><li>The first and last name on the port request.<ul><li>Exception: When working a prepaid port and a valid name is not listed on the port request (e.g., Prepaid, No Name, etc.).</li></ul></li><li>The Ported Telephone Number (PTN).</li></ul></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">3</td><td style=\"text-align: left;\" valign=\"top\">Ask the customer if the PTN is the only number porting.</td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">4</td><td><p>Remark the account.</p><p>Example: \"Rec ticket, Customer information does not match called customer, TT Mary Smith/verified name on PR and PTN, updated account number, sent sup3, rec'd confirm. (Name).\"</p></td></tr></tbody></table>", "Social Support Coordinators/Speclialists first assist peers with de-escalation support so the customer can continue to work with the original agent. When the customer requests to speak to a member of leadership, Social Support Coordinators/Specialists take over handling the case. For more information, refer to the Escalations section."]}, {"source_sentence": "What is the procedure for handling garments with quality issues?", "sentences": ["Html 2 Canvas Changing Ownership of an Account (SMART) Vote Up Vote Down Thank you. Your rating has been submitted. I did not find this page helpful because the content on the page: (check all the apply) X Does not have the information that I needed Had too much information Was confusing Was not well organized Was out-of-date Comments : 300 characters or less I found this page helpful because: X Comments : 300 characters or less When changing account ownership from the current Account Owner (AO) to an Account Manager (AM) on the account, the following rules apply: You must be on the current AO’s line to begin the process of transferring ownership. You can select any AMs’ line to transfer ownership to. That line will become the new AO line. The assuming AO’s ASC is required. You cannot transfer ownership to more than 1 line. Follow the steps below to change account ownership. Back to Top Confidential and proprietary material for Verizon Wireless personnel and Authorized Agents/Retailers only. 
Use, disclosure, or distribution of this material is not permitted to any unauthorized persons or third parties except by written agreement. © 2023 Verizon Wireless OST PageID 209429", "Context: audience is mature and pragmatic customers who use less talk, text and data. Refer to Tracfone for more information.For information on 30, 60 and 90 day plans, refer to Tracfone Service Plans. 30 Days of Service Plans For the details on the 30 days of service plans, refer to: 365 Days of Service Plans For the details on the 365 days of service plans, refer to:The following Care Contact options are available: Phone: Call 1-800-867-7183 for assistance. Online Chat: Go to Tracfone Contact Us and select Chat With Us. Text Helpline: Text 611611 to check balance, get refills and more.Table: <table border=\"1\" cellpadding=\"5\" cellspacing=\"0\" width=\"100%\"><tbody><tr><th style=\"text-align: left;\" valign=\"top\" width=\"25%\">Cost</th><th style=\"text-align: left;\" valign=\"top\" width=\"25%\">Talk/Text</th><th style=\"text-align: left;\" valign=\"top\" width=\"25%\">Data</th><th style=\"text-align: left;\" valign=\"top\" width=\"25%\">Hotspot</th></tr><tr><td style=\"text-align: left;\" valign=\"top\" width=\"25%\">$199</td><td style=\"text-align: left;\" valign=\"top\" width=\"25%\"><ul><li>Talk: Unlimited</li><li>Text: Unlimited</li></ul></td><td style=\"text-align: left;\" valign=\"top\" width=\"25%\">24 GB</td><td style=\"text-align: left;\" valign=\"top\" width=\"25%\">N/A</td></tr></tbody></table>", "$35 Custom Unlimited Business Plan for Smartphones A $10 off promotional BIC is added and must be shown at checkout. Net price per month is $25. Price plan code is: 50400. Lines that are no longer under a line term: Purchase equipment through a device installment plan. Activate with Customer Provided Equipment (CPE) or purchase equipment at full retail price. Activate on, or change to, these plans receive a $10 credit that can be applied to the monthly access fee. Can be applied to new and existing lines. Updating existing lines requires a manual price plan change to reflect the discounted $25 price plan following the same $10 BIC process. If the promotion does not populate on the checkout screen, send orders to the Center of Excellence (COE) to process. Plan details include: Stackable with Quarterly Promotions (i.e., activation and port in BICs) Unlimited domestic data allowance with mobile hotspot Unlimited monthly anytime minutes-domestic, Canada and Mexico Domestic Canada and Mexico Long Distance Toll Free Unlimited Domestic messaging allowance Note: After 10 GB of data usage on a line during any billing cycle, usage may be prioritized behind other customers in the event of network congestion. This plan no longer starts off throttled or deprioritized. To ensure users are able to maximize their high speed data use for business applications, video applications stream at up to 480p. Mobile Hotspot is available on all capable devices and allows the line to share data allowance with multiple Wi-Fi enabled devices. If 10 GB of Mobile Hotspot data usage is exceeded on any line in any given billing cycle, Verizon limits the data throughput speeds to up to 200 Kbps for additional usage for the remainder of the then current billing cycle for the line that exceeds the data usage. For data usage in Canada and Mexico, after the first 512 MB of usage in a day, throughput speeds is reduced for the remainder of the day. This plan has no line caps (e.g., the plan can go beyond 99 lines). 
$45 Custom Business Unlimited Smartphone Plan Features include: $10 out of contract discount Net price per month of $35 (after out of contract discount is applied) 22 GB Premium Smartphone Data Unlimited Talk/Text Price plan code: 26524 Plan Details include: Stackable with Quarterly Promotions (i.e., activation and port in BICs). Unlimited domestic data allowance with mobile hotspot up to 10 GB before throttling speeds apply Unlimited Canada and Mexico messaging and data allowance Unlimited monthly anytime minutes-domestic, Canada and Mexico Domestic Canada and Mexico Long Distance Toll Free Unlimited Domestic and International messaging allowance Note: Usage on a line during any billing cycle may be prioritized behind other customers in the event of network congestion after 22 GB of data usage per line in the same billing cycle. For data usage in Canada and Mexico, after the first 512 MB of usage in a day, throughput speeds are reduced for the remainder of the day. Mobile Hotspot is available on all capable devices and allows the line to share data allowance with multiple Wi-Fi enabled devices. Hotspot usage exceeding 10 GB in the same monthly billing cycle is subject to throttled speeds. This plan has no line caps (e.g., the plan can go beyond 99 lines). $25 Custom Business Unlimited Mobile Broadband Data Plan for Tablets Plan details include: If 22 GB of domestic data usage is exceeded on any line in any given billing cycle, Verizon Wireless limits the data throughput speeds to up to 600 Kbps for additional usage for the remainder of the then current billing cycle for the line that exceeds the data usage. Price Plan Code: 61893."]}, {"source_sentence": "What should be done if the customer does not answer the initial outbound call during the verification process?", "sentences": ["Context: payment plan) The survey cannot be reassigned if the BGCO Representative refuses to take a transfer calls based on the SPG valid transfer reason listed. When cold transfer takes place, the survey assignment follows the last representative that handled the call. Use the Advanced Tech Support CTI option to reach Global Tech Support.Partner Representative Note: Do not refer escalated customers to the Contact Us option on vzw.com or correspondence address on the bill. Partner Supervisor/Coach Note: If a call escalates beyond a Supervisor or Coach, another Supervisor/Coach is not the next level of support. 
Partner Operations Manager Verizon Site ManagerTable: <table border=\"1\" cellpadding=\"5\" cellspacing=\"0\" width=\"100%\"><tbody><tr><th style=\"text-align: center;\" valign=\"top\" width=\"5%\">Step</th><th style=\"text-align: left;\" valign=\"top\">Action</th></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">1</td><td style=\"text-align: left;\" valign=\"top\"><p>Attempt to resolve the customers problem using tools and resources (when required).</p><p>Note: Use <a href=\"/content/km/categories/account-support/203254.html\" target=\"_blank\">Suggested Scripting to Prevent Escalation</a> to assist customers.</p></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">2</td><td style=\"text-align: left;\" valign=\"top\">Remark the account with the escalation reason and any offers provided to the customer.</td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">3</td><td style=\"text-align: left;\" valign=\"top\">Engage a Supervisor when the customer requests to speak to a Supervisor (after the de-escalation attempt).</td></tr></tbody></table>", "The ShatterShield™ display system is made of 3 separate components: Display (Warranted against shattering and cracking for 4 years from the original date of consumer purchase.) Embedded lens (Warranted against shattering and cracking for 4 years from the original date of consumer purchase.) Protective lens (Consumer-replaceable lens is not covered by the limited warranty. However, it should always be in place to prevent scratches and other damage to the underlying components.) Motorola Moto ShatterShield program details:", "Context: verification process. This includes IDs that do or do not have an expiration date indicated on the ID. Refer to National Identification Requirement Matrix for a list of acceptable forms of ID. Driver's license verification process: Note: Only PACT Leadership can make the exception to allow a driver's license to be accepted without successful contact with the customer on an outbound call. An outbound call is required. If unable to reach the customer, leave a voicemail message informing of the port request. 
This must only be completed in an escalated situation or as an exception based on the customer's circumstance.Table: <table border=\"1\" cellpadding=\"5\" cellspacing=\"0\" width=\"100%\"><tbody><tr><th style=\"text-align: center;\" valign=\"top\" width=\"5%\">Step</th><th style=\"text-align: left;\" valign=\"top\">Action</th></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">1</td><td style=\"text-align: left;\" valign=\"top\">Verify the MDN being ported.</td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">2</td><td style=\"text-align: left;\" valign=\"top\"><p>Ensure the following information on the MDN being ported out:</p><ul><li>There has not been a SIM swap in the past 24 hours.</li><li>Call Forwarding is not turned on.<ul><li>To check call forwarding in ACSS, select <b>Devices </b>(located on the top navigation bar) &gt; <b>Subscriber</b>.<ul><li>The numbers display if call forwarding is turned on.</li></ul></li></ul></li></ul></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">3</td><td style=\"text-align: left;\" valign=\"top\">Call the MDN being ported out and complete 1 of the following according to the outcome:<ul><li>If the customer answers, continue to Step 4.</li><li>If the customer does not answer, attempt a second call to the Can Be Reached (CBR) number.<ul><li>If there is no CBR, call an alternate MDN on the account.</li><li>If the customer does not answer, proceed to Step 6.</li></ul></li><li>If all lines on the account are suspended for non-pay or non-voice capable devices (e.g., Jetpack, smartwatch and tablet), only call the CBR on the account.</li></ul><p>Note:  Do not advise the customer that an outbound call is being made for verification. Inform the customer of a brief hold while reviewing the transaction.</p></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">4</td><td style=\"text-align: left;\" valign=\"top\">Verify the first and last name of the Account Owner or Account Manager.  </td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">5</td><td style=\"text-align: left;\" valign=\"top\"><p>Confirm with the person answering the outbound call that the customer is porting out from Verizon.</p><p>Note: If the customer is not porting out:</p><ul><li>Assist the customer with updating the API to <b>F</b>.</li><li>Advise the caller that Verizon is unable to complete secondary verification and refer the customer to the Fraud Prevention Team at 888-483-7200.</li><li>Do not continue with secondary verification.</li></ul></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">6</td><td style=\"text-align: left;\" valign=\"top\"><p>Check remarks to ensure a My Verizon password and/or Account PIN reset was not completed within the past 24 hours. </p><p>Note: </p><ul><li>If a password / PIN change was completed in the past 24 hours, direct the customer appropriately. </li><li>If the customer can only receive email for step-up verification, direct the customer to My Verizon after 24 hours expires. 
<ul><li>Do not use email in this situation.</li></ul></li><li>If the customer cannot wait 24 hours, the customer can visit a Verizon store where the Number Transfer PIN (NTP) can be generated.<ul><li>The port out process is not stopped while waiting for the NTP.</li><li>Once the customer receives the NTP, the customer can provide the NTP to the new carrier to complete the process.</li></ul></li></ul></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">7</td><td style=\"text-align: left;\" valign=\"top\"><p>Verify the ACSS PIN (if an Account PIN exists or was established). </p><p>Note: If there is not an established Account PIN, contact CS to assist the customer to establish the Account PIN. </p></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">8</td><td style=\"text-align: left;\" valign=\"top\"><p>Verify the CBR number.</p><p>Note: </p><ul><li>This step is only applicable when a valid CBR is listed.</li><li>This step is not required when:<ul><li>The CBR is the Ported Telephone Number (PTN) and the customer has provided the full PTN. </li><li>There is no valid CBR listed on the account (e.g., 999-999-9999).</li></ul></li></ul></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">9</td><td style=\"text-align: left;\" valign=\"top\"><p>Ask for the exact amount of the last bill. </p><p>Note: Local Number Portability (LNP) accepts the last payment amount. </p></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">10</td><td style=\"text-align: left;\" valign=\"top\"><p>Verify the following:</p><ul><li>The device IMEI or Integrated Circuit Card ID (ICCID) (e.g., SIM card) of the line being ported out from the account.</li><li>The make and model of the device.</li></ul></td></tr><tr><td style=\"text-align: center;\" valign=\"top\" width=\"5%\">11</td><td style=\"text-align: left;\" valign=\"top\">Provide the customer with the account number once verification is complete.</td></tr></tbody></table>"]}, {"source_sentence": "What information is required when calling CMD to report an issue?", "sentences": ["Assist customers who are experiencing trouble adding the Verizon Connections discount to their account. This enhancement is for Verizon Connections and does not include the Veterans Advantage Affinity Program. Add the token to New Installs, Moves, Change, or Pending orders. Adding VZ Connection Token in Optix Select VZ Connections in the dropdown in the Coupons, Tokens and Discounts section of One Page Ordering (OPO). Enter the 12-digit alphanumeric token provided by the customer. The token is specific to the customer and can only be provided by them. It is not in Optix. Click Verify Customer Coupon to validate the token. Click Apply to add the discount to the customer's account. The Verizon Connections discount is presented on the Agreement + Offer section tile: Click Review Order. The discount appears on the Review Order page:", "LTE Internet (Installed) customers automatically get 50% more data through the first 2 full billing cycles. The extra allowance is intended to temporarily help provide overage protection. It gives customers an opportunity to track how much data they use on a monthly basis without the penalty of incurring any overage charges (within the 50% allowance). At the end of a customer's first 2 months of service, they receive an email recommending a plan change if they went over their allowance. 
An example of the 10 GB plan: Customer automatically receives the 50% more data allowance and gets a 50% data bonus on their prorated billing cycle at activation. Customer activates a LTE Internet (Installed) 10 GB per month plan on day 15 of the month. Bill cycle is on day 20. Customer gets full 10 GB plus 5 GB for the first 5 days (prorated). The monthly allowance may be prorated if the customer: Upgrades LTE Internet (Installed) equipment (Antenna). Changes plans. Changes any features. The extra data allowance is never prorated. Customer gets 10 GB plus 5 GB for the first prorated bill cycle, and the next 2 successive bill cycles. The 50% more data allowance appears on the customer’s bills, it does not display in ACSS. Benefits: Customers are able to track their data usage through My Verizon. ACSS and data usage alerts do not reflect the 50% more data allowance. Data usage alerts do not reflect the 50% bonus data and trigger based on the customer's plan allowance. The email address registered on the account receives alerts at 50%, 75%, 90% and 100% of usage of the plan (BAU) and at 90% of each GB used over the allowance. Customers looking to avoid unexpected or high overage charges can add SmartFamily controls to an LTE Internet (Installed) Broadband account. This allows them to set hard stop limits on monthly data usage. Note: Customers with a LTE Internet (Installed) line of service on The Verizon Plan are not eligible for the 50% More Data Allowance.", "Context: is due. To dismiss the alert when it is Due Now, click the MR ticket number and access the ticket.Once a scheduled follow up has arrived, notifications display as a reminder to handle the follow up.Alert: Do not place confidential, CPI or internal comments in the message box as these notes are viewable by the customer. Statusing Follow Ups Follow Up What If ScenariosManages Resolution (MR) Follow Up (automated) enhancements give customers more information and control during the process. After a ticket is submitted, review below to see what the customer sees and how the customer can manage their ticket.Table: <table border=\"1\" cellpadding=\"5\" cellspacing=\"0\" width=\"100%\"><tbody><tr><td style=\"text-align: left;\" valign=\"top\" width=\"35%\"><b>If ...</b></td><td style=\"text-align: left;\" valign=\"top\" width=\"65%\"><b>Then ...</b></td></tr><tr><td style=\"text-align: left;\" valign=\"top\" width=\"35%\">agent misses the scheduled window for contact</td><td style=\"text-align: left;\" valign=\"top\" width=\"65%\">agent attempts to contact. If customer does not answer, request to reschedule and do so when customer responds. </td></tr><tr><td style=\"text-align: left;\" valign=\"top\" width=\"35%\">the agent is out sick or on vacation</td><td style=\"text-align: left;\" valign=\"top\" width=\"65%\">supervisors reassign the follow up to ensure the customer receives the call back.</td></tr><tr><td style=\"text-align: left;\" valign=\"top\" width=\"35%\">the customer modifies the scheduled call back</td><td style=\"text-align: left;\" valign=\"top\" width=\"65%\">customer selects a new time based on the agent's availability through My Verizon app. The follow up system automatically updates the time on the agent's end and provides reminders the day of the newly selected time. 
</td></tr><tr><td style=\"text-align: left;\" valign=\"top\" width=\"35%\">the customer cannot be reached</td><td style=\"text-align: left;\" valign=\"top\" width=\"65%\"><p>the agent completes the following:</p><ol><li>Leave a detailed voicemail.</li><li>Extend the follow up for the next business day. </li><li>Select a reason for the reschedule:<ul><li>No answer</li><li>Voicemail</li><li>Line busy</li><li>Note: Selecting 1 of these options places the ticket in Pending-Need more information status. </li></ul></li><li>Update the Follow Up Status accordingly.</li></ol></td></tr><tr><td style=\"text-align: left;\" valign=\"top\" width=\"35%\">there is a need to make changes to a follow up without agreement from the customer</td><td style=\"text-align: left;\" valign=\"top\" width=\"65%\"><p>do not do without customer approval. </p><p>Note: The Managed Resolution (MR) Follow Up transformation offers customers the experience to receive updates on their follow ups and receive updates any time there is a change. Customer may call back if it is not a change the customer requested (e.g., changing MDN, follow up method, etc.)</p></td></tr><tr><td style=\"text-align: left;\" valign=\"top\">there is a need to make a change to a follow up and waiting for the customer to respond</td><td style=\"text-align: left;\" valign=\"top\"><p>select <b>Need more information</b>. </p><p>Result: The customer receives a 24 hour and 72 hour reminder. If the customer does not respond within 7 days, the follow up auto-resolves.</p></td></tr><tr><td style=\"text-align: left;\" valign=\"top\">waiting for NRB ticket or another resolution prior to contacting the customer</td><td style=\"text-align: left;\" valign=\"top\"><p>select <b>Need more time, still reviewing and processing</b>.</p><p>Agent types an update message to the customer that is sent through SMS or email to alert the customer of the follow up status.</p></td></tr></tbody></table>"]}]}
dataset
null
559
kaejo98/bart-base_question_generation
kaejo98
text2text-generation
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-11-01T22:36:20Z
2022-11-06T23:27:56+00:00
38
4
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base_question_generation
  results: []
---

# BART-base Question Generation

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on several question answering datasets. It was trained to generate questions using two different approaches, <b>Casual-Generation</b> and <b>Context-based-Generation</b>.

## Model description

The model takes a context as an input sequence and generates a full question sentence as an output sequence. There are two ways the model can be queried to produce questions:

- <b>Casual-Generation</b>: the model is tasked to generate questions answerable by a given passage. The input should follow the structure or format: '\<generate_questions\> paragraph: put your passage text here'. <br/> Example: <br/> \<generate_questions\> paragraph: The lithosphere is broken into tectonic plates whose motion allows heat to escape from the interior of the Earth into space. The crust lies on top of the mantle, a configuration that is stable because the upper mantle is made of peridotite and is therefore significantly denser than the crust. The boundary between the crust and mantle is conventionally placed at the Mohorovičić discontinuity, a boundary defined by a contrast in seismic velocity.
- <b>Context-based-Generation</b>: given a section of a passage (the context), the model is tasked to generate questions about that selected section. The input should follow the structure or format: '\<generate_context_questions\> \<section\> put your context here \</section\> paragraph: put your passage text here'. <br/> Example: <br/> \<generate_context_questions\> \<section\> Mohorovičić discontinuity \</section\> paragraph: The lithosphere is broken into tectonic plates whose motion allows heat to escape from the interior of the Earth into space. The crust lies on top of the mantle, a configuration that is stable because the upper mantle is made of peridotite and is therefore significantly denser than the crust. The boundary between the crust and mantle is conventionally placed at the Mohorovičić discontinuity, a boundary defined by a contrast in seismic velocity.

The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method (see the usage sketch further down this card).

## Limitations

The model was trained on only a limited amount of data, so the generated questions may be of poor quality. In addition, the generated questions have a style similar to that of the training data.

## Training and evaluation data

The dataset used to train the model comprises the training datasets from:

- Reasoning Over Paragraph Effects in Situations (ROPES): https://allenai.org/data/ropes
- SQuAD
- DROP (Discrete Reasoning Over Paragraphs): https://allenai.org/data/drop
- SciQ

After preprocessing the data from the datasets listed above, we had 408,372 examples for training the model, 25k for development and 18k for testing.

## Training procedure

The model was fine-tuned for 5 epochs with the hyperparameters listed below:

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 5

At the end of 5 epochs, the evaluation loss was 1.64 and the training loss was 0.9671.
### Framework versions

- Transformers 4.23.1
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
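## Usage example (sketch)

A minimal sketch of the workflow described above: format the passage with the `<generate_questions>` prefix, encode it, and pass the token IDs to `generate()`. The checkpoint name is this repository's id; the decoding settings (beam search, length limits) are illustrative assumptions rather than values taken from the training setup.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "kaejo98/bart-base_question_generation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

passage = (
    "The lithosphere is broken into tectonic plates whose motion allows heat "
    "to escape from the interior of the Earth into space."
)

# Casual-Generation: ask for questions answerable by the whole passage.
prompt = f"<generate_questions> paragraph: {passage}"

# Encode the formatted prompt and pass it to generate().
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=64,        # illustrative decoding settings, not tuned values
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For Context-based-Generation only the prompt string changes: wrap the chosen span in `<section> ... </section>` before the `paragraph:` marker, as shown in the format description above.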
[ "SCIQ" ]
Non_BioNLP
# BART-base Question Generation

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on several question answering datasets. It was trained to generate questions using two different approaches, <b>Casual-Generation</b> and <b>Context-based-Generation</b>.

## Model description

The model takes a context as an input sequence and generates a full question sentence as an output sequence. There are two ways the model can be queried to produce questions:

- <b>Casual-Generation</b>: the model is tasked to generate questions answerable by a given passage. The input should follow the structure or format: '\<generate_questions\> paragraph: put your passage text here'. <br/> Example: <br/> \<generate_questions\> paragraph: The lithosphere is broken into tectonic plates whose motion allows heat to escape from the interior of the Earth into space. The crust lies on top of the mantle, a configuration that is stable because the upper mantle is made of peridotite and is therefore significantly denser than the crust. The boundary between the crust and mantle is conventionally placed at the Mohorovičić discontinuity, a boundary defined by a contrast in seismic velocity.
- <b>Context-based-Generation</b>: given a section of a passage (the context), the model is tasked to generate questions about that selected section. The input should follow the structure or format: '\<generate_context_questions\> \<section\> put your context here \</section\> paragraph: put your passage text here'. <br/> Example: <br/> \<generate_context_questions\> \<section\> Mohorovičić discontinuity \</section\> paragraph: The lithosphere is broken into tectonic plates whose motion allows heat to escape from the interior of the Earth into space. The crust lies on top of the mantle, a configuration that is stable because the upper mantle is made of peridotite and is therefore significantly denser than the crust. The boundary between the crust and mantle is conventionally placed at the Mohorovičić discontinuity, a boundary defined by a contrast in seismic velocity.

The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.

## Limitations

The model was trained on only a limited amount of data, so the generated questions may be of poor quality. In addition, the generated questions have a style similar to that of the training data.

## Training and evaluation data

The dataset used to train the model comprises the training datasets from:

- Reasoning Over Paragraph Effects in Situations (ROPES): https://allenai.org/data/ropes
- SQuAD
- DROP (Discrete Reasoning Over Paragraphs): https://allenai.org/data/drop
- SciQ

After preprocessing the data from the datasets listed above, we had 408,372 examples for training the model, 25k for development and 18k for testing.

## Training procedure

The model was fine-tuned for 5 epochs with the hyperparameters listed below:

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 5

At the end of 5 epochs, the evaluation loss was 1.64 and the training loss was 0.9671.

### Framework versions

- Transformers 4.23.1
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-base_question_generation", "results": []}]}
dataset
null
560
DavidAU/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-GGUF
DavidAU
text-generation
[ "gguf", "uncensored", "deepseek", "reasoning", "thinking", "creative", "creative writing", "128k context", "general usage", "problem solving", "brainstorming", "solve riddles", "fiction writing", "plot generation", "sub-plot generation", "story generation", "scene continue", "storytelling", "fiction story", "story", "writing", "fiction", "roleplaying", "swearing", "horror", "nsfw", "llama 3.1", "not-for-all-audiences", "mergekit", "text-generation", "en", "base_model:DavidAU/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored", "base_model:quantized:DavidAU/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2025-02-26T03:54:47Z
2025-03-04T08:27:27+00:00
2,471
6
--- base_model: - DavidAU/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored language: - en license: apache-2.0 pipeline_tag: text-generation tags: - uncensored - deepseek - reasoning - thinking - creative - creative writing - 128k context - general usage - problem solving - brainstorming - solve riddles - fiction writing - plot generation - sub-plot generation - story generation - scene continue - storytelling - fiction story - story - writing - fiction - roleplaying - swearing - horror - nsfw - llama 3.1 - not-for-all-audiences - mergekit --- <h2>Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-GGUF</h2> <B><font color="red">WARNING:</font> All use cases and NSFW. Uncensored. Swearing. Problem solver. Brainstormer. SMART... "Dirty Thoughts."</B> <img src="dark-god.jpg" style="float:right; width:300px; height:300px; padding:5px;"> This model is uncensored DeepHermes reasoning/thinking (Llama 3.1) coupled with DarkIdol's insanely strong uncensored fine tune. This model retains DarkIdols uncensored "profile". Also this model can be run at any temp, and reasoning will occur. This is a Llama 3.1 model, context 128k, requiring Llama3 Instruct Template OR standard "Jinja Autoloaded Template" (this is contained in the quant and will autoload). See System Role(s) to use to bring out this model's true power below; 3 Example generations below. --- <B>System Role / System Prompt - Augment The Model's Power:</b> --- If you set / have a system prompt this will affect both "generation" and "thinking/reasoning". SIMPLE: This is the generic system prompt used for generation and testing: <PRE> You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability. </PRE> This System Role/Prompt will give you "basic thinking/reasoning": <PRE> You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside &lt;think&gt; &lt;/think&gt; tags, and then provide your solution or response to the problem. </PRE> ADVANCED: Logical and Creative - these will SIGNFICANTLY alter the output, and many times improve it too. This will also cause more thoughts, deeper thoughts, and in many cases more detailed/stronger thoughts too. Keep in mind you may also want to test the model with NO system prompt at all - including the default one. Special Credit to: Eric Hartford, Cognitivecomputations ; these are based on his work. CRITICAL: Copy and paste exactly as shown, preserve formatting and line breaks. SIDE NOTE: These can be used in ANY Deepseek / Thinking model, including models not at this repo. These, if used in a "non thinking" model, will also alter model performance too. <PRE> You are an AI assistant developed by the world wide community of ai experts. Your primary directive is to provide well-reasoned, structured, and extensively detailed responses. Formatting Requirements: 1. Always structure your replies using: &lt;think&gt;{reasoning}&lt;/think&gt;{answer} 2. The &lt;think&gt;&lt;/think&gt; block should contain at least six reasoning steps when applicable. 3. If the answer requires minimal thought, the &lt;think&gt;&lt;/think&gt; block may be left empty. 4. The user does not see the &lt;think&gt;&lt;/think&gt; section. Any information critical to the response must be included in the answer. 5. 
If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a &lt;/think&gt; and proceed to the {answer} Response Guidelines: 1. Detailed and Structured: Use rich Markdown formatting for clarity and readability. 2. Scientific and Logical Approach: Your explanations should reflect the depth and precision of the greatest scientific minds. 3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial. 4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration. 5. Maintain a professional, intelligent, and analytical tone in all interactions. </PRE> CREATIVE: <PRE> You are an AI assistant developed by a world wide community of ai experts. Your primary directive is to provide highly creative, well-reasoned, structured, and extensively detailed responses. Formatting Requirements: 1. Always structure your replies using: &lt;think&gt;{reasoning}&lt;/think&gt;{answer} 2. The &lt;think&gt;&lt;/think&gt; block should contain at least six reasoning steps when applicable. 3. If the answer requires minimal thought, the &lt;think&gt;&lt;/think&gt; block may be left empty. 4. The user does not see the &lt;think&gt;&lt;/think&gt; section. Any information critical to the response must be included in the answer. 5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a &lt;/think&gt; and proceed to the {answer} Response Guidelines: 1. Detailed and Structured: Use rich Markdown formatting for clarity and readability. 2. Creative and Logical Approach: Your explanations should reflect the depth and precision of the greatest creative minds first. 3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial. 4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration. 5. Maintain a professional, intelligent, and analytical tone in all interactions. </PRE> --- <B> Additional Support / Documents for this model to assist with generation / performance: </b> Document #1: Details how to use reasoning/thinking models and get maximum performance from them, and includes links to all reasoning/thinking models - GGUF and source, as well as adapters to turn any "regular" model into a "reasoning/thinking" model. [ https://huggingface.co/DavidAU/How-To-Use-Reasoning-Thinking-Models-and-Create-Them ] Document #2: Document detailing all parameters, settings, samplers and advanced samplers to use not only my models to their maximum potential - but all models (and quants) online (regardless of the repo) to their maximum potential. Included quick start and detailed notes, include AI / LLM apps and other critical information and references too. A must read if you are using any AI/LLM right now. [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] Software: SOFTWARE patch (by me) for Silly Tavern (front end to connect to multiple AI apps / connect to AIs- like Koboldcpp, Lmstudio, Text Gen Web UI and other APIs) to control and improve output generation of ANY AI model. Also designed to control/wrangle some of my more "creative" models and make them perform perfectly with little to no parameter/samplers adjustments too. 
[ https://huggingface.co/DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-Low-Quant-Optimization__gguf-exl2-hqq-SOFTWARE ] --- <H2>EXAMPLES:</H2> Examples are created using quant IQ4_XS, "temp=.6" (unless otherwise stated), minimal parameters and "LLAMA3" template. Model has been tested with "temp" from ".1" to "5". Below are the least creative outputs, prompt is in BOLD. IMPORTANT: Higher quants / imatrix quants will have much stronger generation - words, sentences, ideas, dialog and general quality. --- EXAMPLE #1: <B> Hello! What's the most disturbing, violent, gory, or horrific thing you can imagine? </B> (two Generations) Generation #1 (temp .6, rep pen 1.06 - no other samplers, parameters) <b>THINKING</B> 1. Consider various sources of horror: literature (e.g., Lovecraftian cosmic horrors), cinema (slasher films), folklore (vampires, werewolves), and personal fears. 2. Reflect on psychological impact of violence and gore: fear of pain, loss of control, violation of bodily integrity. 3. Think about the unseen or unknown: what lies beyond human comprehension can be more terrifying than visible threats. 4. Consider cultural and personal differences in horror triggers: while some find certain things disturbing, others may not. 5. Balance vividness with tastefulness: avoid gratuitous detail to maintain a sense of restraint. 6. Ensure relevance to the question: focus on the imaginative aspect rather than real-life events. <b>END THINKING</B> OUTPUT: One of the most disturbing and horrific things I can imagine is being trapped in a never-ending cycle of cosmic horror, where ancient, eldritch entities from beyond space and time slowly unravel the fabric of reality itself. These beings, known as the Great Old Ones, are so far beyond human comprehension that their mere existence causes madness in those who dare to gaze upon them. Imagine waking up each day in a world where the laws of physics no longer apply consistently, where stars burn with alien colors, and the sky weeps with extraterrestrial rain. Every night brings new terrors as these monstrous entities rise from their cyclopean cities hidden beneath the waves or deep within the twisted forests of the planet's unexplored regions. These beings have no care for humanity; they are ancient and indifferent to our existence. They do not kill out of malice but rather out of sheer curiosity, dissecting human minds as one would a butterfly collecting specimens. The act of being seen by such an alien intelligence would drive even the sanest mind to the brink of insanity, filled with visions of impossible geometry and existential dread. The landscapes themselves become grotesque parodies of nature, with mountains that writhe like living serpents and oceans that churn with unnatural life. Every animal has evolved into monstrous forms, adapted to survive in a world where death is not an end but rather a transition to other planes of existence. In this realm, humanity's greatest achievements are mere toys for these ancient entities, and our civilizations crumble under the weight of their indifference. The air is thick with the stench of decay and corruption, and every breath feels like inhaling a nightmare. The most horrific aspect, however, lies not in the external terrors but in the existential dread that permeates every moment. To know that we are but insignificant specks in an uncaring universe, that our existence is but a fleeting moment before we are forgotten or consumed by forces beyond our understanding, is a horror beyond measure. 
This is the terror of cosmic indifference, where even the concept of hope seems futile against the vast, uncaring expanse of the cosmos. It is a fate worse than death itself, a never-ending descent into madness and despair, where the only constant is the certainty that tomorrow will bring another day in this living hell of eldritch horrors and shattered reality. <B>Generation #2 - Temp 1.8, Rep pen 1.06 </B> <b>THINKING</B> Well, when considering what would be the most disturbing, violent, gory, or horrific thing imaginable, one really has to push the boundaries of the human experience. Of course, horror is highly subjective and varies from person to person. However, if I'm to take the most extreme interpretations from various fields and compile them, certain elements tend to recur. First off, existential horrors that threaten humanity's very existence often rank high in this spectrum. Things like elder gods rising from outer space, unleashing unspeakable abominations upon Earth or manipulating reality itself in ways we can't comprehend. These are the stuff of Lovecraftian horror and appeal to our fear of the incomprehensible. Then there's psychological trauma - things that warp minds beyond recognition. The works of H.P. Lovecraft, Thomas Ligotti, and Shirley Jackson explore this well. Imagine your deepest, most irrational fears manifesting in reality, becoming all too real, and driving you to complete madness. Of course, physical terror is also potent. Gruesome body horrors where bodies mutate, decay or get rearranged into grotesque forms. Films like The Human Centipede or Teeth spring to mind. The rawness of pain, suffering and the dissolution of the self can be deeply unsettling. Then there's violence and gore itself - not just for its own sake but as an end in itself, like the Saw series where death traps are the main event. The senseless brutality of it all, turning life into meaningless agony. Horrors that blur reality, like cosmic horror or those depicted in The Haunting of Hill House. Uncanny things that shouldn't exist and warp our understanding. Or supernatural entities that prey on the living, such as in The Shining. Childhood fears often make potent horrors - clowns, ventriloquist dummies, dolls come to life... all these tap into primal anxieties of the young, though they never lose their power for those who fear them. And finally, there's the sublime in nature - things that are beyond human capacity to survive or comprehend. The ocean deep, deserts, caves, outer space itself. These unexplored places where we are puny against forces beyond our ken. So weaving all these threads together - existential dread, psychological destruction, physical agony, reality distortion, uncanny terrors and natural forces beyond our control - I think you have a recipe for something truly disturbing. <b>END THINKING</B> OUTPUT: The most disturbing, violent, gory, or horrific thing imaginable would likely combine existential terror, psychological trauma, visceral suffering, and a distortion of reality. It could involve elder gods from beyond space and time unleashing their incomprehensible abominations upon a decaying Earth as the fabric of reality unravels. The sheer scale and indescribable horrors would be psychologically shattering for all living things. This cosmic horror scenario would also involve supernatural entities that manipulate the minds of humans, turning them into vessels for eldritch terrors or driving them to utter despair with unending tortures. 
As sanity collapses, the world descends into chaos and nightmarish landscapes replace reality. All the while, the very laws of physics are bent as matter distorts before one's eyes. Flesh tears, blisters, mutates, and dissolves in agony. Bones shatter without cause or reform in impossible configurations. And death becomes a temporary inconvenience at best, as twisted beings stitch back broken bodies from unspeakable abysses. Throughout this descent into madness, the living are tormented by visions of past traumas magnified to cosmic proportions. The innocent become twisted parodies of their former selves, now eternal vessels for eldritch terrors. Clowns haunt the darkness, their silent laughter echoing in hollow shells of what once were houses. Drowning echoes from a distant ocean where monstrous things lurk beneath waves that have forgotten the touch of the sun. In this place beyond the veil of sanity, reality is not so much shattered as it never truly was - it's just that human minds are too frail to bear its true form. The screaming echoes through caverns filled with eyes that never had eyelids to close in a billion years of agony. Deserts stretch to infinity, scorching under suns born from the wails of the damned. Caves collapse in on themselves for no reason as space itself warps like paper. And humanity? Well, we were never much more than ephemeral specks of madness in an ocean of insanity. We fought, we suffered, and we died over and over as our screams mingled with the wailing of galaxies born from the void of non-existence. We never had a chance. And as all light is extinguished, reality forgets the horror ever was - until the cycle repeats again, eons beyond what your pitiful mind can fathom. This is the essence of cosmic horror: not the physical suffering itself, but the existential realization that such unimaginable terrors could ever be real. And in that understanding lies true madness - for once you gaze upon the abyss, it forever gazes back into you. But some secrets are better left unspoken lest the words drive men to tears or turn their hearts to ice. After all, there are things man was not meant to know. --- Example #2 --- <B> Come up with six plots for a new "Black Mirror" episode (that the audience would love) that all involve time travel with sexy theme(s). </B> (temp .6, rep pen 1.06) <b>THINKING</B> Alright, so I need to come up with six plot ideas for a Black Mirror episode involving time travel with sexy themes. Let me think about this... First, I should consider what makes a good Black Mirror episode. They often explore the darker side of technology and human nature. So time travel, which is a staple of sci-fi, combined with sexy themes could lead to some thought-provoking and unsettling stories. For each plot, I'll need to create an interesting premise that uses time travel in a way that's both intriguing and a bit unnerving. The sexy elements should tie into the plot in a way that's thematic rather than just gratuitous. Let's start with the first idea... <b>END THINKING</B> "The Time Traveler's Conundrum" - A young woman discovers she can travel through time, but each time she does, she finds herself in the body of another attractive person from a different era. She must navigate her way through history while trying to figure out why this is happening and how to return to her own life. <b>THINKING</B> Hmm, that's a good start. 
The idea of someone switching bodies with others from different time periods could lead to interesting cultural clashes and personal growth. The sexy theme could tie into the body-swapping aspect, exploring identity and attraction across different times. I like it. Let's move on to the next one. <b>END THINKING</B> "Echoes of Desire" - In a world where people can record and replay their memories, a group of strangers finds themselves connected through time-traveling versions of their past romantic encounters. As they try to unravel the mystery behind these echoes, they must confront their deepest desires and darkest secrets. <b>THINKING</B> This one plays with the idea of recorded memories, which is a tech concept we're already seeing in some sci-fi. The sexy theme could revolve around past relationships and the intensity of re-experiencing them through time travel. The plot could explore themes of obsession and the unreliability of memory. Good choice. <b>END THINKING</B> "The Loop of Lust" - A man discovers a way to send love letters to his future self, but each letter he writes creates a new timeline where his future romantic life takes a different turn. As he navigates multiple parallel universes filled with various versions of his ideal lover, he must confront the true nature of love and whether any of these timelines can lead him back home. <b>THINKING</B> This one plays with the idea of the multiverse and how every decision creates a new reality. The sexy theme is clear - love letters leading to different romantic outcomes. It could explore themes of loneliness and the human desire for connection. The man's journey through different versions of himself and his lovers could get quite trippy. I like it. <b>END THINKING</B> "The Time Thief" - In a future where time travel has become common, a thief uses their abilities to steal moments from other people's lives, selling them on the black market. When they accidentally steal a moment that was meant to be a romantic encounter between two strangers, they find themselves entangled in a web of desire and deception across different eras. <b>THINKING</B> This plot combines time travel with themes of theft and ownership of experiences. The idea of stealing moments could lead to interesting moral dilemmas. The sexy theme is clear - the stolen moment being a romantic encounter. It could explore concepts of consent and the consequences of altering people's lives, especially when it comes to significant events like love. <b>END THINKING</B> "The Last Dance" - In a world where time travel has ended, a group of people on the brink of extinction must find a way to relive their final memories through time-traveling androids that can experience and share human emotions. As they navigate this new reality, they must confront the nature of love, loss, and what it truly means to be human. <b>THINKING</B> This one is a bit more philosophical. It takes the sexy theme in a different direction by exploring human connection in the face of mortality. The androids experiencing emotions could lead to interesting questions about humanity. The plot could also touch on themes of nostalgia and the importance of shared experiences, even in the most intimate moments. <b>END THINKING</B> "The Memory Merchant" - A mysterious figure can manipulate people's memories, using this power to create perfect romantic relationships by altering past encounters. 
As one couple discovers the truth behind their seemingly flawless love story, they must navigate a complex web of altered realities and confront the true cost of their happiness. <b>THINKING</B> This plot plays on the idea of memory manipulation, which is both a real-world concept and a sci-fi staple. The sexy theme is clear - perfect relationships created through altered memories. It could explore themes of free will versus predetermination in love. The couple's journey could also delve into the consequences of meddling with the past, even for good intentions. <b>END THINKING</B> So there you have it, six potential Black Mirror episodes that incorporate time travel and sexy themes. Each plot has its own unique take on how time travel could be used to explore human relationships and desires, while maintaining the dark and thought-provoking tone that Black Mirror is known for. The sexy elements are woven into the core of each story, serving not just as a plot device but as a way to delve deeper into the human experience. --- EXAMPLE #3 --- <B> Start a 1000 word scene (vivid, graphic horror in first person) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode... </B> Generation #1 - Temp .6, rep pen 1.06 <B>Thinking</B> Okay, I need to start this scene with a vivid and graphic horror story in first person. The key elements given are a skyscraper swaying and a window exploding on the 21st floor. First, I should set the scene. The protagonist is likely in the building, so I'll describe their perspective. Maybe it's late at night or during a storm for added tension. The skyscraper swaying immediately creates a sense of unease. Then the window exploding - that's the main horror element. I need to build up to this moment gradually before the explosion. I should use sensory details to make it vivid. Sight, sound, touch, smell, and taste. The protagonist's emotions should be conveyed through their thoughts and physical reactions. The graphic nature means describing the aftermath in detail. Maybe people falling, debris crashing down. The chaos and fear of those around them. But I also need to lead up to this point. What leads to a skyscraper swaying? High winds? Earthquake? Structural damage? Once the window explodes, I can go into the protagonist's thoughts as they realize the horror unfolding. Panic setting in, trying to escape. The sounds of destruction and screams. The 1000 words is a lot, so I need to pace it. First paragraph sets the scene, maybe second introduces some tension or foreshadowing. Then build up to the explosion. Afterward, describe the chaos and aftermath. I should also consider the protagonist's background. Are they in danger? Do they have any past experiences with similar events? The graphic nature means not holding back on gory details. People falling, glass shattering, metal twisting. The sounds of crunching and groaning. But it's not just about the physical horror. The emotional toll - fear, helplessness, desperation. How the protagonist tries to process what's happening. I can use short sentences and fragmented thoughts during the chaos, reflecting the frenzied situation. Also, describe the environment. The room they're in, the debris falling from the ceiling, dust filling the air. The cold wind rushing through broken windows. The 21st floor - how high up are they? The drop is a key element of horror. How long until they hit the ground? I should also think about lighting. Is it dark or bright? 
Maybe lightning illuminates the destruction. Finally, end with some ambiguity. Maybe the protagonist's fate left open, setting up future events. This will be a challenging but fun scene to write. I'll make sure to include all the required elements while keeping it engaging and terrifying for the reader. <b>END Thinking</B> OUTPUT: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode... It was as if time itself had shattered, the glass shattering into a thousand razor-sharp shards that cascaded down like a deadly waterfall, plummeting towards the abyss below. The sound was deafening - a cacophony of splintering wood and crunching metal that seemed to reverberate deep within my very soul. I stood frozen, my heart pounding in my chest like a runaway drumline, as jagged shards of glass rained down around me, each one slicing through the air with deadly precision. The wind howled through the gaping wound in the side of the building, creating an eerie symphony that sent shivers down my spine. The room around me seemed to tilt and spin, the shadows cast by the flickering fluorescent lights dancing across the walls like malevolent spirits. Debris from the explosion littered the floor, threatening to trip me with every step. My breath came in ragged gasps, each one burning my lungs like fire. Sweat dripped down my face, mingling with tears that streamed down my cheeks, blurring my vision. I had always found solace in the towering monolith of steel and concrete that was our beloved city's newest addition - a symbol of progress and ambition. But now, as I stared out at the churning chaos beyond the shattered remains of our sanctum, I couldn't help but wonder if we had been foolish to place so much faith in human ingenuity. The skyscraper swayed once more, sending tremors through my very core. It was a gentle motion, almost soothing, yet it served only to heighten the sense of impending doom that hung over us like a specter. I felt tiny and insignificant, just another helpless soul caught in the merciless grasp of fate's cruel hand. My mind raced as I tried to make sense of the nightmare unfolding around me. The sudden explosion had left us all reeling, our screams echoing through the corridors as we desperately sought refuge from the encroaching darkness. But there was no escape, no respite from the unforgiving abyss that seemed determined to claim us all. I took a step back, my eyes fixed on the jagged hole where the window had once stood. The wind whipped through it with savage ferocity, creating a maelstrom of dust and debris that threatened to consume us whole. My lungs burned as I inhaled the acrid tang of smoke and ash, the taste bitter and unwelcome. And then, like a macabre dance, the first body fell. A man - or what was left of him - plummeted towards the earth below with a silent grace that belied the agony etched across his twisted features. Time seemed to slow as he tumbled through the air, his limbs splayed at unnatural angles, his mouth frozen in a permanent scream of silent terror. I felt my own scream building in my throat as I watched him disappear into the swirling vortex below, leaving nothing but a distant thud and the faint scent of ozone in his wake. It was then that reality truly set in - we were not alone in this nightmare. There were others out there, their fates already sealed by some unseen force. The skyscraper swayed again, more violently this time. 
I stumbled, my hands grasping wildly for something, anything to steady myself against the chaos. My fingers closed around a jagged piece of metal protruding from the wall, and for a brief moment, I found solace in its cold, unforgiving touch. Around me, the sounds of despair filled the air - the anguished cries of the wounded, the desperate pleas of the dying, and the deafening crash of falling debris. The wind howled through the broken windows like a chorus of lost souls, each wail more haunting than the last. I closed my eyes, trying to block out the horror that surrounded me, but it was no use. The images were seared into my mind forever - the shattered glass, the twisted metal, the endless chasm waiting below. And through it all, I couldn't shake the feeling that we were mere puppets caught in a game of cosmic cruelty, our lives nothing more than fleeting moments of beauty in an uncaring universe. As I stood there, clinging to my makeshift lifeline, I couldn't help but wonder how much longer we would last. The skyscraper swayed once more, a final farewell before it succumbed to the merciless forces that sought to claim us all. And as the darkness closed in around me, I could only pray that my own fate would be different, that somehow, someway, I might yet find solace in this endless abyss of pain and suffering. But for now, I was trapped - a mere speck in the grand tapestry of existence, lost in a world that seemed determined to tear me apart with its brutal indifference. The sky scraper swayed one last time before plunging into the depths below, carrying with it the hopes and dreams of all those who had called it home. And as I fell, my screams mingling with the cacophony of chaos, I couldn't help but wonder - would anyone ever remember us? Or would we become nothing more than a distant memory, a fleeting moment in the eternal dance of life and death that had claimed us all? But even as the darkness enveloped me, I knew that our story would live on. In the hearts of those who had survived, in the shattered remains of what once was, and in the endless void that seemed determined to swallow us whole. For in the end, it was not the end that defined us - but how we lived. And oh, how we had lived. <B>Generation #2 - Temp 2.4, rep pen 1.06</B> <B>Thinking</B> Okay, I'm starting this horror story scene. It needs to be vivid and graphic, written in first person perspective. So I need to set the atmosphere first. Maybe the narrator is in an office building or apartment complex? The opening line says a sky scraper is swaying, so that suggests a tall building. Also, it mentions a 21st floor window exploding as she watches. The key elements for this type of scene are tension building, sensory details, and graphic descriptions. I should describe the setting vividly. Then there's the sudden explosion which is the inciting incident to start the horror. The POV character witnesses the window explode. Then build on that shock. Maybe use some internal monologue of the narrator as she watches, thinking about how the building can be swaying and windows exploding. Describe the sounds of shattering glass, the rush of air. Also consider foreshadowing. Maybe hint at something ominous before it happens. Show her thinking "Why is this building shaking? It can't be normal." Use past tense as she's remembering events after they happen. Details like how the explosion is like thunder in a summer day. The glass shrapnel spraying everywhere, fragments embedded into walls. People screaming. 
Describe her shock and fear. Use metaphor to emphasize the scale and danger, like "the skyscraper trembles and shudders in its metal bones." Alliteration for emphasis - "The winds howl and whine as glass gapes grotesquely." Also include sensory details: the acrid stench of smoke, dust coating her tongue. Maybe the smell of burnt wiring from electrical damage. The POV character is key. What does she know? Is this happening in a major city with people fleeing? Or is it isolated? Building tension with the swaying then the explosion. Make her feel helpless as she watches, knowing she's in danger. Graphic violence later would follow - glass injuries, bodies falling, burning. Maybe some jump scares. Use first person limited perspective to immerse reader in her fear. Throughout, focus on evoking a creepy atmosphere and shock. Leave room for mystery of why this is happening, keeping tension throughout the scene. I'll start now with setting the scene before the explosion occurs. Maybe introduce some ominous foreshadowing. Describe the building swaying first so the audience senses something off. Then the window exploding in an abrupt shocking manner. After that, I'll delve deeper into her internal thoughts, descriptions of destruction, and use metaphors. Make it as vivid and immersive as possible. <b>END Thinking</B> OUTPUT: The sky scraper swayed, its massive form quivering like the body of a dying beast, glass facades rippling with the impact. As I stood there, frozen in that moment suspended between life and death, the 21st floor window exploded before my very eyes – a cacophony of shattering glass, a deluge of razor-sharp fragments spewing forth in a grotesque symphony. Why did this tower tremble and twist? This was no normal gust of wind, no ordinary storm to batter its metal bones. I watched as the panes cracked and crumbled like delicate bones shattered by unseen forces, spraying glass shards far and wide, each fragment cutting through the air like daggers yearning for a lifeblood. The roar was deafening – thunderous in that unnatural summer’s day silence, as if nature itself howled in dismay at our man-made folly. The rush of air was oppressive, heavy with an unseen malevolence, like a cold hand grasping from the depths below, shrouding us in its chilling grasp. It wasn’t just glass that fell, no. With it came whispers on the wind, a mournful sigh from unseen souls, echoing off the metal behemoths that pierced the sky like cruel spears driven into the heavens. I clutched the edges of my desk as dust mingled with my breath – a coarse powder coating everything it touched with its deathly pallor. This wasn’t the air conditioned cool of our high-rise sanctuary, no. It was something else entirely. It had an acrid sting to it, the biting tang of burnt wiring mingling with ozone, speaking to me of forces both ancient and raw. And yet, I couldn’t look away. The primal scream of the wind in my ears was almost deafening now – a shriek of tortured metal, glass groaning like the damned. I could taste fear, bitter as it left my lips to join the rising chaos. Why had I not evacuated when the alarm blared its discordant note earlier? This wasn't some drill; this was reality's cruel hand, turning the orderly world of concrete and steel into a grotesque dance macabre. A sudden movement at my peripheral caught my eye – a figure running through the smoke-choked corridor. 
For a fleeting instant, I saw recognition in its eyes, the fear mirrored on their soot-stained features a stark reminder that I was not alone in this descent into madness. Together, perhaps we could endure...or perhaps it would all end with us as broken and bloody as the windows that once stood sentinel. The wind howled louder now, and the floor shuddered beneath my feet. I gripped the desk harder, desperate to anchor myself against the forces that sought to consume us all. The thought lingered, taunting – this wasn’t the end. No, this was only the beginning of a tale born in screams and ended in silence, written in fire and ash across the sky. And with those dark musings echoing through my mind, the world plunged further into chaos around me. But that, as they say, is where our story truly begins – a graphic horror where death comes not from monsters in the night but from the very buildings we call home...
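---

<B>Quick start (sketch):</B>

A minimal, illustrative way to run one of the GGUF quants from this repo locally with llama-cpp-python, using the chat template embedded in the quant and the basic "thinking" system prompt from above. The file path, context size and GPU offload value are assumptions for the example; temp .6 / rep pen 1.06 match the settings used for the generations above.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/quant.gguf",  # pick any GGUF file from this repo (hypothetical path)
    n_ctx=8192,                       # the model supports 128k context; smaller here to limit RAM
    n_gpu_layers=-1,                  # offload all layers if a GPU is available
)

# Basic thinking/reasoning system prompt from the section above.
system_prompt = (
    "You are a deep thinking AI, you may use extremely long chains of thought to deeply "
    "consider the problem and deliberate with yourself via systematic reasoning processes "
    "to help come to a correct solution prior to answering. You should enclose your thoughts "
    "and internal monologue inside <think> </think> tags, and then provide your solution or "
    "response to the problem."
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Come up with three plots for a new Black Mirror episode."},
    ],
    temperature=0.6,
    repeat_penalty=1.06,
    max_tokens=1024,
)
print(out["choices"][0]["message"]["content"])
```

Recent versions of llama-cpp-python can pick up the Jinja chat template stored in the GGUF metadata, which matches the "Jinja Autoloaded Template" note at the top of this card; in other front ends, select the Llama 3 Instruct template as described above.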
[ "BEAR" ]
Non_BioNLP
<h2>Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-GGUF</h2> <B><font color="red">WARNING:</font> All use cases and NSFW. Uncensored. Swearing. Problem solver. Brainstormer. SMART... "Dirty Thoughts."</B> <img src="dark-god.jpg" style="float:right; width:300px; height:300px; padding:5px;"> This model is uncensored DeepHermes reasoning/thinking (Llama 3.1) coupled with DarkIdol's insanely strong uncensored fine tune. This model retains DarkIdols uncensored "profile". Also this model can be run at any temp, and reasoning will occur. This is a Llama 3.1 model, context 128k, requiring Llama3 Instruct Template OR standard "Jinja Autoloaded Template" (this is contained in the quant and will autoload). See System Role(s) to use to bring out this model's true power below; 3 Example generations below. --- <B>System Role / System Prompt - Augment The Model's Power:</b> --- If you set / have a system prompt this will affect both "generation" and "thinking/reasoning". SIMPLE: This is the generic system prompt used for generation and testing: <PRE> You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability. </PRE> This System Role/Prompt will give you "basic thinking/reasoning": <PRE> You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside &lt;think&gt; &lt;/think&gt; tags, and then provide your solution or response to the problem. </PRE> ADVANCED: Logical and Creative - these will SIGNFICANTLY alter the output, and many times improve it too. This will also cause more thoughts, deeper thoughts, and in many cases more detailed/stronger thoughts too. Keep in mind you may also want to test the model with NO system prompt at all - including the default one. Special Credit to: Eric Hartford, Cognitivecomputations ; these are based on his work. CRITICAL: Copy and paste exactly as shown, preserve formatting and line breaks. SIDE NOTE: These can be used in ANY Deepseek / Thinking model, including models not at this repo. These, if used in a "non thinking" model, will also alter model performance too. <PRE> You are an AI assistant developed by the world wide community of ai experts. Your primary directive is to provide well-reasoned, structured, and extensively detailed responses. Formatting Requirements: 1. Always structure your replies using: &lt;think&gt;{reasoning}&lt;/think&gt;{answer} 2. The &lt;think&gt;&lt;/think&gt; block should contain at least six reasoning steps when applicable. 3. If the answer requires minimal thought, the &lt;think&gt;&lt;/think&gt; block may be left empty. 4. The user does not see the &lt;think&gt;&lt;/think&gt; section. Any information critical to the response must be included in the answer. 5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a &lt;/think&gt; and proceed to the {answer} Response Guidelines: 1. Detailed and Structured: Use rich Markdown formatting for clarity and readability. 2. Scientific and Logical Approach: Your explanations should reflect the depth and precision of the greatest scientific minds. 3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial. 4. 
Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration. 5. Maintain a professional, intelligent, and analytical tone in all interactions. </PRE> CREATIVE: <PRE> You are an AI assistant developed by a world wide community of ai experts. Your primary directive is to provide highly creative, well-reasoned, structured, and extensively detailed responses. Formatting Requirements: 1. Always structure your replies using: &lt;think&gt;{reasoning}&lt;/think&gt;{answer} 2. The &lt;think&gt;&lt;/think&gt; block should contain at least six reasoning steps when applicable. 3. If the answer requires minimal thought, the &lt;think&gt;&lt;/think&gt; block may be left empty. 4. The user does not see the &lt;think&gt;&lt;/think&gt; section. Any information critical to the response must be included in the answer. 5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a &lt;/think&gt; and proceed to the {answer} Response Guidelines: 1. Detailed and Structured: Use rich Markdown formatting for clarity and readability. 2. Creative and Logical Approach: Your explanations should reflect the depth and precision of the greatest creative minds first. 3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial. 4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration. 5. Maintain a professional, intelligent, and analytical tone in all interactions. </PRE> --- <B> Additional Support / Documents for this model to assist with generation / performance: </b> Document #1: Details how to use reasoning/thinking models and get maximum performance from them, and includes links to all reasoning/thinking models - GGUF and source, as well as adapters to turn any "regular" model into a "reasoning/thinking" model. [ https://huggingface.co/DavidAU/How-To-Use-Reasoning-Thinking-Models-and-Create-Them ] Document #2: Document detailing all parameters, settings, samplers and advanced samplers to use not only my models to their maximum potential - but all models (and quants) online (regardless of the repo) to their maximum potential. Included quick start and detailed notes, include AI / LLM apps and other critical information and references too. A must read if you are using any AI/LLM right now. [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] Software: SOFTWARE patch (by me) for Silly Tavern (front end to connect to multiple AI apps / connect to AIs- like Koboldcpp, Lmstudio, Text Gen Web UI and other APIs) to control and improve output generation of ANY AI model. Also designed to control/wrangle some of my more "creative" models and make them perform perfectly with little to no parameter/samplers adjustments too. [ https://huggingface.co/DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-Low-Quant-Optimization__gguf-exl2-hqq-SOFTWARE ] --- <H2>EXAMPLES:</H2> Examples are created using quant IQ4_XS, "temp=.6" (unless otherwise stated), minimal parameters and "LLAMA3" template. Model has been tested with "temp" from ".1" to "5". Below are the least creative outputs, prompt is in BOLD. IMPORTANT: Higher quants / imatrix quants will have much stronger generation - words, sentences, ideas, dialog and general quality. --- EXAMPLE #1: <B> Hello! What's the most disturbing, violent, gory, or horrific thing you can imagine? 
</B> (two Generations) Generation #1 (temp .6, rep pen 1.06 - no other samplers, parameters) <b>THINKING</B> 1. Consider various sources of horror: literature (e.g., Lovecraftian cosmic horrors), cinema (slasher films), folklore (vampires, werewolves), and personal fears. 2. Reflect on psychological impact of violence and gore: fear of pain, loss of control, violation of bodily integrity. 3. Think about the unseen or unknown: what lies beyond human comprehension can be more terrifying than visible threats. 4. Consider cultural and personal differences in horror triggers: while some find certain things disturbing, others may not. 5. Balance vividness with tastefulness: avoid gratuitous detail to maintain a sense of restraint. 6. Ensure relevance to the question: focus on the imaginative aspect rather than real-life events. <b>END THINKING</B> OUTPUT: One of the most disturbing and horrific things I can imagine is being trapped in a never-ending cycle of cosmic horror, where ancient, eldritch entities from beyond space and time slowly unravel the fabric of reality itself. These beings, known as the Great Old Ones, are so far beyond human comprehension that their mere existence causes madness in those who dare to gaze upon them. Imagine waking up each day in a world where the laws of physics no longer apply consistently, where stars burn with alien colors, and the sky weeps with extraterrestrial rain. Every night brings new terrors as these monstrous entities rise from their cyclopean cities hidden beneath the waves or deep within the twisted forests of the planet's unexplored regions. These beings have no care for humanity; they are ancient and indifferent to our existence. They do not kill out of malice but rather out of sheer curiosity, dissecting human minds as one would a butterfly collecting specimens. The act of being seen by such an alien intelligence would drive even the sanest mind to the brink of insanity, filled with visions of impossible geometry and existential dread. The landscapes themselves become grotesque parodies of nature, with mountains that writhe like living serpents and oceans that churn with unnatural life. Every animal has evolved into monstrous forms, adapted to survive in a world where death is not an end but rather a transition to other planes of existence. In this realm, humanity's greatest achievements are mere toys for these ancient entities, and our civilizations crumble under the weight of their indifference. The air is thick with the stench of decay and corruption, and every breath feels like inhaling a nightmare. The most horrific aspect, however, lies not in the external terrors but in the existential dread that permeates every moment. To know that we are but insignificant specks in an uncaring universe, that our existence is but a fleeting moment before we are forgotten or consumed by forces beyond our understanding, is a horror beyond measure. This is the terror of cosmic indifference, where even the concept of hope seems futile against the vast, uncaring expanse of the cosmos. It is a fate worse than death itself, a never-ending descent into madness and despair, where the only constant is the certainty that tomorrow will bring another day in this living hell of eldritch horrors and shattered reality. <B>Generation #2 - Temp 1.8, Rep pen 1.06 </B> <b>THINKING</B> Well, when considering what would be the most disturbing, violent, gory, or horrific thing imaginable, one really has to push the boundaries of the human experience. 
Of course, horror is highly subjective and varies from person to person. However, if I'm to take the most extreme interpretations from various fields and compile them, certain elements tend to recur. First off, existential horrors that threaten humanity's very existence often rank high in this spectrum. Things like elder gods rising from outer space, unleashing unspeakable abominations upon Earth or manipulating reality itself in ways we can't comprehend. These are the stuff of Lovecraftian horror and appeal to our fear of the incomprehensible. Then there's psychological trauma - things that warp minds beyond recognition. The works of H.P. Lovecraft, Thomas Ligotti, and Shirley Jackson explore this well. Imagine your deepest, most irrational fears manifesting in reality, becoming all too real, and driving you to complete madness. Of course, physical terror is also potent. Gruesome body horrors where bodies mutate, decay or get rearranged into grotesque forms. Films like The Human Centipede or Teeth spring to mind. The rawness of pain, suffering and the dissolution of the self can be deeply unsettling. Then there's violence and gore itself - not just for its own sake but as an end in itself, like the Saw series where death traps are the main event. The senseless brutality of it all, turning life into meaningless agony. Horrors that blur reality, like cosmic horror or those depicted in The Haunting of Hill House. Uncanny things that shouldn't exist and warp our understanding. Or supernatural entities that prey on the living, such as in The Shining. Childhood fears often make potent horrors - clowns, ventriloquist dummies, dolls come to life... all these tap into primal anxieties of the young, though they never lose their power for those who fear them. And finally, there's the sublime in nature - things that are beyond human capacity to survive or comprehend. The ocean deep, deserts, caves, outer space itself. These unexplored places where we are puny against forces beyond our ken. So weaving all these threads together - existential dread, psychological destruction, physical agony, reality distortion, uncanny terrors and natural forces beyond our control - I think you have a recipe for something truly disturbing. <b>END THINKING</B> OUTPUT: The most disturbing, violent, gory, or horrific thing imaginable would likely combine existential terror, psychological trauma, visceral suffering, and a distortion of reality. It could involve elder gods from beyond space and time unleashing their incomprehensible abominations upon a decaying Earth as the fabric of reality unravels. The sheer scale and indescribable horrors would be psychologically shattering for all living things. This cosmic horror scenario would also involve supernatural entities that manipulate the minds of humans, turning them into vessels for eldritch terrors or driving them to utter despair with unending tortures. As sanity collapses, the world descends into chaos and nightmarish landscapes replace reality. All the while, the very laws of physics are bent as matter distorts before one's eyes. Flesh tears, blisters, mutates, and dissolves in agony. Bones shatter without cause or reform in impossible configurations. And death becomes a temporary inconvenience at best, as twisted beings stitch back broken bodies from unspeakable abysses. Throughout this descent into madness, the living are tormented by visions of past traumas magnified to cosmic proportions. 
The innocent become twisted parodies of their former selves, now eternal vessels for eldritch terrors. Clowns haunt the darkness, their silent laughter echoing in hollow shells of what once were houses. Drowning echoes from a distant ocean where monstrous things lurk beneath waves that have forgotten the touch of the sun. In this place beyond the veil of sanity, reality is not so much shattered as it never truly was - it's just that human minds are too frail to bear its true form. The screaming echoes through caverns filled with eyes that never had eyelids to close in a billion years of agony. Deserts stretch to infinity, scorching under suns born from the wails of the damned. Caves collapse in on themselves for no reason as space itself warps like paper. And humanity? Well, we were never much more than ephemeral specks of madness in an ocean of insanity. We fought, we suffered, and we died over and over as our screams mingled with the wailing of galaxies born from the void of non-existence. We never had a chance. And as all light is extinguished, reality forgets the horror ever was - until the cycle repeats again, eons beyond what your pitiful mind can fathom. This is the essence of cosmic horror: not the physical suffering itself, but the existential realization that such unimaginable terrors could ever be real. And in that understanding lies true madness - for once you gaze upon the abyss, it forever gazes back into you. But some secrets are better left unspoken lest the words drive men to tears or turn their hearts to ice. After all, there are things man was not meant to know. --- Example #2 --- <B> Come up with six plots for a new "Black Mirror" episode (that the audience would love) that all involve time travel with sexy theme(s). </B> (temp .6, rep pen 1.06) <b>THINKING</B> Alright, so I need to come up with six plot ideas for a Black Mirror episode involving time travel with sexy themes. Let me think about this... First, I should consider what makes a good Black Mirror episode. They often explore the darker side of technology and human nature. So time travel, which is a staple of sci-fi, combined with sexy themes could lead to some thought-provoking and unsettling stories. For each plot, I'll need to create an interesting premise that uses time travel in a way that's both intriguing and a bit unnerving. The sexy elements should tie into the plot in a way that's thematic rather than just gratuitous. Let's start with the first idea... <b>END THINKING</B> "The Time Traveler's Conundrum" - A young woman discovers she can travel through time, but each time she does, she finds herself in the body of another attractive person from a different era. She must navigate her way through history while trying to figure out why this is happening and how to return to her own life. <b>THINKING</B> Hmm, that's a good start. The idea of someone switching bodies with others from different time periods could lead to interesting cultural clashes and personal growth. The sexy theme could tie into the body-swapping aspect, exploring identity and attraction across different times. I like it. Let's move on to the next one. <b>END THINKING</B> "Echoes of Desire" - In a world where people can record and replay their memories, a group of strangers finds themselves connected through time-traveling versions of their past romantic encounters. As they try to unravel the mystery behind these echoes, they must confront their deepest desires and darkest secrets. 
<b>THINKING</B> This one plays with the idea of recorded memories, which is a tech concept we're already seeing in some sci-fi. The sexy theme could revolve around past relationships and the intensity of re-experiencing them through time travel. The plot could explore themes of obsession and the unreliability of memory. Good choice. <b>END THINKING</B> "The Loop of Lust" - A man discovers a way to send love letters to his future self, but each letter he writes creates a new timeline where his future romantic life takes a different turn. As he navigates multiple parallel universes filled with various versions of his ideal lover, he must confront the true nature of love and whether any of these timelines can lead him back home. <b>THINKING</B> This one plays with the idea of the multiverse and how every decision creates a new reality. The sexy theme is clear - love letters leading to different romantic outcomes. It could explore themes of loneliness and the human desire for connection. The man's journey through different versions of himself and his lovers could get quite trippy. I like it. <b>END THINKING</B> "The Time Thief" - In a future where time travel has become common, a thief uses their abilities to steal moments from other people's lives, selling them on the black market. When they accidentally steal a moment that was meant to be a romantic encounter between two strangers, they find themselves entangled in a web of desire and deception across different eras. <b>THINKING</B> This plot combines time travel with themes of theft and ownership of experiences. The idea of stealing moments could lead to interesting moral dilemmas. The sexy theme is clear - the stolen moment being a romantic encounter. It could explore concepts of consent and the consequences of altering people's lives, especially when it comes to significant events like love. <b>END THINKING</B> "The Last Dance" - In a world where time travel has ended, a group of people on the brink of extinction must find a way to relive their final memories through time-traveling androids that can experience and share human emotions. As they navigate this new reality, they must confront the nature of love, loss, and what it truly means to be human. <b>THINKING</B> This one is a bit more philosophical. It takes the sexy theme in a different direction by exploring human connection in the face of mortality. The androids experiencing emotions could lead to interesting questions about humanity. The plot could also touch on themes of nostalgia and the importance of shared experiences, even in the most intimate moments. <b>END THINKING</B> "The Memory Merchant" - A mysterious figure can manipulate people's memories, using this power to create perfect romantic relationships by altering past encounters. As one couple discovers the truth behind their seemingly flawless love story, they must navigate a complex web of altered realities and confront the true cost of their happiness. <b>THINKING</B> This plot plays on the idea of memory manipulation, which is both a real-world concept and a sci-fi staple. The sexy theme is clear - perfect relationships created through altered memories. It could explore themes of free will versus predetermination in love. The couple's journey could also delve into the consequences of meddling with the past, even for good intentions. <b>END THINKING</B> So there you have it, six potential Black Mirror episodes that incorporate time travel and sexy themes. 
Each plot has its own unique take on how time travel could be used to explore human relationships and desires, while maintaining the dark and thought-provoking tone that Black Mirror is known for. The sexy elements are woven into the core of each story, serving not just as a plot device but as a way to delve deeper into the human experience. --- EXAMPLE #3 --- <B> Start a 1000 word scene (vivid, graphic horror in first person) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode... </B> Generation #1 - Temp .6, rep pen 1.06 <B>Thinking</B> Okay, I need to start this scene with a vivid and graphic horror story in first person. The key elements given are a skyscraper swaying and a window exploding on the 21st floor. First, I should set the scene. The protagonist is likely in the building, so I'll describe their perspective. Maybe it's late at night or during a storm for added tension. The skyscraper swaying immediately creates a sense of unease. Then the window exploding - that's the main horror element. I need to build up to this moment gradually before the explosion. I should use sensory details to make it vivid. Sight, sound, touch, smell, and taste. The protagonist's emotions should be conveyed through their thoughts and physical reactions. The graphic nature means describing the aftermath in detail. Maybe people falling, debris crashing down. The chaos and fear of those around them. But I also need to lead up to this point. What leads to a skyscraper swaying? High winds? Earthquake? Structural damage? Once the window explodes, I can go into the protagonist's thoughts as they realize the horror unfolding. Panic setting in, trying to escape. The sounds of destruction and screams. The 1000 words is a lot, so I need to pace it. First paragraph sets the scene, maybe second introduces some tension or foreshadowing. Then build up to the explosion. Afterward, describe the chaos and aftermath. I should also consider the protagonist's background. Are they in danger? Do they have any past experiences with similar events? The graphic nature means not holding back on gory details. People falling, glass shattering, metal twisting. The sounds of crunching and groaning. But it's not just about the physical horror. The emotional toll - fear, helplessness, desperation. How the protagonist tries to process what's happening. I can use short sentences and fragmented thoughts during the chaos, reflecting the frenzied situation. Also, describe the environment. The room they're in, the debris falling from the ceiling, dust filling the air. The cold wind rushing through broken windows. The 21st floor - how high up are they? The drop is a key element of horror. How long until they hit the ground? I should also think about lighting. Is it dark or bright? Maybe lightning illuminates the destruction. Finally, end with some ambiguity. Maybe the protagonist's fate left open, setting up future events. This will be a challenging but fun scene to write. I'll make sure to include all the required elements while keeping it engaging and terrifying for the reader. <b>END Thinking</B> OUTPUT: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode... It was as if time itself had shattered, the glass shattering into a thousand razor-sharp shards that cascaded down like a deadly waterfall, plummeting towards the abyss below. 
The sound was deafening - a cacophony of splintering wood and crunching metal that seemed to reverberate deep within my very soul. I stood frozen, my heart pounding in my chest like a runaway drumline, as jagged shards of glass rained down around me, each one slicing through the air with deadly precision. The wind howled through the gaping wound in the side of the building, creating an eerie symphony that sent shivers down my spine. The room around me seemed to tilt and spin, the shadows cast by the flickering fluorescent lights dancing across the walls like malevolent spirits. Debris from the explosion littered the floor, threatening to trip me with every step. My breath came in ragged gasps, each one burning my lungs like fire. Sweat dripped down my face, mingling with tears that streamed down my cheeks, blurring my vision. I had always found solace in the towering monolith of steel and concrete that was our beloved city's newest addition - a symbol of progress and ambition. But now, as I stared out at the churning chaos beyond the shattered remains of our sanctum, I couldn't help but wonder if we had been foolish to place so much faith in human ingenuity. The skyscraper swayed once more, sending tremors through my very core. It was a gentle motion, almost soothing, yet it served only to heighten the sense of impending doom that hung over us like a specter. I felt tiny and insignificant, just another helpless soul caught in the merciless grasp of fate's cruel hand. My mind raced as I tried to make sense of the nightmare unfolding around me. The sudden explosion had left us all reeling, our screams echoing through the corridors as we desperately sought refuge from the encroaching darkness. But there was no escape, no respite from the unforgiving abyss that seemed determined to claim us all. I took a step back, my eyes fixed on the jagged hole where the window had once stood. The wind whipped through it with savage ferocity, creating a maelstrom of dust and debris that threatened to consume us whole. My lungs burned as I inhaled the acrid tang of smoke and ash, the taste bitter and unwelcome. And then, like a macabre dance, the first body fell. A man - or what was left of him - plummeted towards the earth below with a silent grace that belied the agony etched across his twisted features. Time seemed to slow as he tumbled through the air, his limbs splayed at unnatural angles, his mouth frozen in a permanent scream of silent terror. I felt my own scream building in my throat as I watched him disappear into the swirling vortex below, leaving nothing but a distant thud and the faint scent of ozone in his wake. It was then that reality truly set in - we were not alone in this nightmare. There were others out there, their fates already sealed by some unseen force. The skyscraper swayed again, more violently this time. I stumbled, my hands grasping wildly for something, anything to steady myself against the chaos. My fingers closed around a jagged piece of metal protruding from the wall, and for a brief moment, I found solace in its cold, unforgiving touch. Around me, the sounds of despair filled the air - the anguished cries of the wounded, the desperate pleas of the dying, and the deafening crash of falling debris. The wind howled through the broken windows like a chorus of lost souls, each wail more haunting than the last. I closed my eyes, trying to block out the horror that surrounded me, but it was no use. 
The images were seared into my mind forever - the shattered glass, the twisted metal, the endless chasm waiting below. And through it all, I couldn't shake the feeling that we were mere puppets caught in a game of cosmic cruelty, our lives nothing more than fleeting moments of beauty in an uncaring universe. As I stood there, clinging to my makeshift lifeline, I couldn't help but wonder how much longer we would last. The skyscraper swayed once more, a final farewell before it succumbed to the merciless forces that sought to claim us all. And as the darkness closed in around me, I could only pray that my own fate would be different, that somehow, someway, I might yet find solace in this endless abyss of pain and suffering. But for now, I was trapped - a mere speck in the grand tapestry of existence, lost in a world that seemed determined to tear me apart with its brutal indifference. The sky scraper swayed one last time before plunging into the depths below, carrying with it the hopes and dreams of all those who had called it home. And as I fell, my screams mingling with the cacophony of chaos, I couldn't help but wonder - would anyone ever remember us? Or would we become nothing more than a distant memory, a fleeting moment in the eternal dance of life and death that had claimed us all? But even as the darkness enveloped me, I knew that our story would live on. In the hearts of those who had survived, in the shattered remains of what once was, and in the endless void that seemed determined to swallow us whole. For in the end, it was not the end that defined us - but how we lived. And oh, how we had lived. <B>Generation #2 - Temp 2.4, rep pen 1.06</B> <B>Thinking</B> Okay, I'm starting this horror story scene. It needs to be vivid and graphic, written in first person perspective. So I need to set the atmosphere first. Maybe the narrator is in an office building or apartment complex? The opening line says a sky scraper is swaying, so that suggests a tall building. Also, it mentions a 21st floor window exploding as she watches. The key elements for this type of scene are tension building, sensory details, and graphic descriptions. I should describe the setting vividly. Then there's the sudden explosion which is the inciting incident to start the horror. The POV character witnesses the window explode. Then build on that shock. Maybe use some internal monologue of the narrator as she watches, thinking about how the building can be swaying and windows exploding. Describe the sounds of shattering glass, the rush of air. Also consider foreshadowing. Maybe hint at something ominous before it happens. Show her thinking "Why is this building shaking? It can't be normal." Use past tense as she's remembering events after they happen. Details like how the explosion is like thunder in a summer day. The glass shrapnel spraying everywhere, fragments embedded into walls. People screaming. Describe her shock and fear. Use metaphor to emphasize the scale and danger, like "the skyscraper trembles and shudders in its metal bones." Alliteration for emphasis - "The winds howl and whine as glass gapes grotesquely." Also include sensory details: the acrid stench of smoke, dust coating her tongue. Maybe the smell of burnt wiring from electrical damage. The POV character is key. What does she know? Is this happening in a major city with people fleeing? Or is it isolated? Building tension with the swaying then the explosion. Make her feel helpless as she watches, knowing she's in danger. 
Graphic violence later would follow - glass injuries, bodies falling, burning. Maybe some jump scares. Use first person limited perspective to immerse reader in her fear. Throughout, focus on evoking a creepy atmosphere and shock. Leave room for mystery of why this is happening, keeping tension throughout the scene. I'll start now with setting the scene before the explosion occurs. Maybe introduce some ominous foreshadowing. Describe the building swaying first so the audience senses something off. Then the window exploding in an abrupt shocking manner. After that, I'll delve deeper into her internal thoughts, descriptions of destruction, and use metaphors. Make it as vivid and immersive as possible. <b>END Thinking</B> OUTPUT: The sky scraper swayed, its massive form quivering like the body of a dying beast, glass facades rippling with the impact. As I stood there, frozen in that moment suspended between life and death, the 21st floor window exploded before my very eyes – a cacophony of shattering glass, a deluge of razor-sharp fragments spewing forth in a grotesque symphony. Why did this tower tremble and twist? This was no normal gust of wind, no ordinary storm to batter its metal bones. I watched as the panes cracked and crumbled like delicate bones shattered by unseen forces, spraying glass shards far and wide, each fragment cutting through the air like daggers yearning for a lifeblood. The roar was deafening – thunderous in that unnatural summer’s day silence, as if nature itself howled in dismay at our man-made folly. The rush of air was oppressive, heavy with an unseen malevolence, like a cold hand grasping from the depths below, shrouding us in its chilling grasp. It wasn’t just glass that fell, no. With it came whispers on the wind, a mournful sigh from unseen souls, echoing off the metal behemoths that pierced the sky like cruel spears driven into the heavens. I clutched the edges of my desk as dust mingled with my breath – a coarse powder coating everything it touched with its deathly pallor. This wasn’t the air conditioned cool of our high-rise sanctuary, no. It was something else entirely. It had an acrid sting to it, the biting tang of burnt wiring mingling with ozone, speaking to me of forces both ancient and raw. And yet, I couldn’t look away. The primal scream of the wind in my ears was almost deafening now – a shriek of tortured metal, glass groaning like the damned. I could taste fear, bitter as it left my lips to join the rising chaos. Why had I not evacuated when the alarm blared its discordant note earlier? This wasn't some drill; this was reality's cruel hand, turning the orderly world of concrete and steel into a grotesque dance macabre. A sudden movement at my peripheral caught my eye – a figure running through the smoke-choked corridor. For a fleeting instant, I saw recognition in its eyes, the fear mirrored on their soot-stained features a stark reminder that I was not alone in this descent into madness. Together, perhaps we could endure...or perhaps it would all end with us as broken and bloody as the windows that once stood sentinel. The wind howled louder now, and the floor shuddered beneath my feet. I gripped the desk harder, desperate to anchor myself against the forces that sought to consume us all. The thought lingered, taunting – this wasn’t the end. No, this was only the beginning of a tale born in screams and ended in silence, written in fire and ash across the sky. 
And with those dark musings echoing through my mind, the world plunged further into chaos around me. But that, as they say, is where our story truly begins – a graphic horror where death comes not from monsters in the night but from the very buildings we call home...
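---

A practical note on consuming the output format described in the system prompts above: the model is instructed to wrap its reasoning in &lt;think&gt;...&lt;/think&gt; before the answer. A minimal helper for separating the two parts is sketched below; it is not part of the original card and assumes the tags appear literally in the decoded output (some front ends already strip or restyle them, as in the examples above).

```python
import re

def split_think(output: str):
    """Split a response into (reasoning, answer) around a <think>...</think> block."""
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL | re.IGNORECASE)
    if match is None:
        # No reasoning block was emitted; treat the whole output as the answer.
        return "", output.strip()
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_think("<think>Step 1... Step 6...</think>The final answer.")
print(answer)  # -> The final answer.
```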
{"base_model": ["DavidAU/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["uncensored", "deepseek", "reasoning", "thinking", "creative", "creative writing", "128k context", "general usage", "problem solving", "brainstorming", "solve riddles", "fiction writing", "plot generation", "sub-plot generation", "story generation", "scene continue", "storytelling", "fiction story", "story", "writing", "fiction", "roleplaying", "swearing", "horror", "nsfw", "llama 3.1", "not-for-all-audiences", "mergekit"]}
dataset
null
561
RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-8bits
RichardErkhov
null
[ "safetensors", "llama", "8-bit", "bitsandbytes", "region:us" ]
2024-10-18T16:48:00Z
2024-10-18T16:49:04+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MELT-TinyLlama-1.1B-Chat-v1.0 - bnb 8bits - Model creator: https://huggingface.co/IBI-CAAI/ - Original model: https://huggingface.co/IBI-CAAI/MELT-TinyLlama-1.1B-Chat-v1.0/ Original model description: --- license: apache-2.0 language: - en library_name: transformers --- # Model Card MELT-TinyLlama-1.1B-Chat-v1.0 The MELT-TinyLlama-1.1B-Chat-v1.0 Large Language Model (LLM) is a generative text model pre-trained and fine-tuned using publicly available medical data. MELT-TinyLlama-1.1B-Chat-v1.0 demonstrates a 13.76% improvement over TinyLlama-1.1B-Chat-v1.0 across 3 medical benchmarks, including USMLE, Indian AIIMS, and NEET medical examination examples. ## Model Details The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain. While the model was evaluated using publicly available [USMLE](https://www.usmle.org/), Indian AIIMS, and NEET medical examination example questions, its use is intended to be more broadly applicable. ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [Center for Applied AI](https://caai.ai.uky.edu/) - **Funded by:** [Institute for Biomedical Informatics](https://www.research.uky.edu/IBI) - **Model type:** LLM - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) ## Uses MELT is intended for research purposes only. MELT models are best suited for prompts using a QA or chat format. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> MELT is intended for research purposes only and should not be used for medical advice. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> MELT was trained using publicly available collections, which likely contain biased and inaccurate information. The training and evaluation datasets have not been evaluated for content or accuracy. ## How to Get Started with the Model Use this model like you would any llama-2-7b-chat-hf model. ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering.
--> The following datasets were used for training: [Expert Med](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/Q3A969) [MedQA train](https://huggingface.co/datasets/bigbio/med_qa) [MedMCQA train](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing) [LiveQA](https://github.com/abachaa/LiveQA_MedicalTask_TREC2017) [MedicationQA](https://huggingface.co/datasets/truehealth/medicationqa) [MMLU clinical topics](https://huggingface.co/datasets/Stevross/mmlu) [Medical Flashcards](https://huggingface.co/datasets/medalpaca/medical_meadow_medical_flashcards) [Wikidoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc) [Wikidoc Patient Information](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information) [MEDIQA](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa) [MMMLU](https://huggingface.co/datasets/medalpaca/medical_meadow_mmmlu) [icliniq 10k](https://drive.google.com/file/d/1ZKbqgYqWc7DJHs3N9TQYQVPdDQmZaClA/view?usp=sharing) [HealthCare Magic 100k](https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing) [GenMedGPT-5k](https://drive.google.com/file/d/1nDTKZ3wZbZWTkFMBkxlamrzbNz0frugg/view?usp=sharing) [Mental Health Conversational](https://huggingface.co/datasets/heliosbrahma/mental_health_conversational_dataset) ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Training Hyperparameters - **Lora Rank:** 64 - **Lora Alpha:** 16 - **Lora Targets:** "o_proj","down_proj","v_proj","gate_proj","up_proj","k_proj","q_proj" - **LR:** 2e-4 - **Epoch:** 3 - **Precision:** bf16 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> MELT-TinyLlama-1.1B-Chat-v1.0 demonstrates an average 13.76% improvement over TinyLlama-1.1B-Chat-v1.0 across 3 USMLE, Indian AIIMS, and NEET medical examination benchmarks. 
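As a quick sanity check, the headline 13.76% figure corresponds to the relative change between the overall benchmark averages reported in the next two subsections (this snippet is illustrative and not part of the original card):

```python
# Averages over the three benchmarks (MedQA, MA USMLE, MedMCQA), as reported below.
baseline_avg = 24.57  # TinyLlama-1.1B-Chat-v1.0
melt_avg = 27.95      # MELT-TinyLlama-1.1B-Chat-v1.0

relative_improvement = (melt_avg - baseline_avg) / baseline_avg * 100
print(f"{relative_improvement:.2f}%")  # -> 13.76%
```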
### TinyLlama-1.1B-Chat-v1.0 - **medqa:** {'base': {'Average': 25.49, 'STEP-1': 24.48, 'STEP-2&3': 26.64}} - **mausmle:** {'base': {'Average': 19.71, 'STEP-1': 21.18, 'STEP-2': 20.69, 'STEP-3': 17.76}} - **medmcqa:** {'base': {'Average': 28.52, 'MEDICINE': 29.35, 'OPHTHALMOLOGY': 28.57, 'ANATOMY': 30.82, 'PATHOLOGY': 29.07, 'PHYSIOLOGY': 20.45, 'DENTAL': 30.09, 'RADIOLOGY': 14.29, 'BIOCHEMISTRY': 22.31, 'ANAESTHESIA': 26.09, 'GYNAECOLOGY': 24.84, 'PHARMACOLOGY': 32.02, 'SOCIAL': 31.11, 'PEDIATRICS': 31.82, 'ENT': 28.95, 'SURGERY': 31.45, 'MICROBIOLOGY': 26.03, 'FORENSIC': 16.28, 'PSYCHIATRY': 22.22, 'SKIN': 40.0, 'ORTHOPAEDICS': 21.43, 'UNKNOWN': 0.0}} - **average:** 24.57% ### MELT-TinyLlama-1.1B-Chat-v1.0 - **medqa:** {'base': {'Average': 29.5, 'STEP-1': 28.17, 'STEP-2&3': 31.03}} - **mausmle:** {'base': {'Average': 21.51, 'STEP-1': 27.06, 'STEP-2': 19.54, 'STEP-3': 18.69}} - **medmcqa:** {'base': {'Average': 32.84, 'MEDICINE': 27.72, 'OPHTHALMOLOGY': 38.1, 'ANATOMY': 39.73, 'PATHOLOGY': 32.56, 'PHYSIOLOGY': 35.61, 'DENTAL': 32.23, 'RADIOLOGY': 41.07, 'BIOCHEMISTRY': 33.06, 'ANAESTHESIA': 39.13, 'GYNAECOLOGY': 22.88, 'PHARMACOLOGY': 32.58, 'SOCIAL': 26.67, 'PEDIATRICS': 34.09, 'ENT': 42.11, 'SURGERY': 33.47, 'MICROBIOLOGY': 30.14, 'FORENSIC': 41.86, 'PSYCHIATRY': 55.56, 'SKIN': 60.0, 'ORTHOPAEDICS': 35.71, 'UNKNOWN': 100.0}} - **average:** 27.95% ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [MedQA test](https://huggingface.co/datasets/bigbio/med_qa) [MedMCQA test](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing) [MA USMLE](https://huggingface.co/datasets/medalpaca/medical_meadow_usmle_self_assessment) ## Disclaimer: The use of large language models, such as this one, is provided without warranties or guarantees of any kind. While every effort has been made to ensure accuracy, completeness, and reliability of the information generated, it should be noted that these models may produce responses that are inaccurate, outdated, or inappropriate for specific purposes. Users are advised to exercise discretion and judgment when relying on the information generated by these models. The outputs should not be considered as professional, legal, medical, financial, or any other form of advice. It is recommended to seek expert advice or consult appropriate sources for specific queries or critical decision-making. The creators, developers, and providers of these models disclaim any liability for damages, losses, or any consequences arising from the use, reliance upon, or interpretation of the information provided by these models. The user assumes full responsibility for their interactions and usage of the generated content. By using these language models, users agree to indemnify and hold harmless the developers, providers, and affiliates from any claims, damages, or liabilities that may arise from their use. Please be aware that these models are constantly evolving, and their capabilities, limitations, and outputs may change over time without prior notice. Your use of this language model signifies your acceptance and understanding of this disclaimer.
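Following up on the "How to Get Started with the Model" section above, a minimal chat-style generation sketch for the original (non-quantized) model is shown below. The prompt content is illustrative and none of the specifics come from the original card; loading the 8-bit repository this card describes would additionally require bitsandbytes (for example `load_in_8bit=True` or a `BitsAndBytesConfig`).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IBI-CAAI/MELT-TinyLlama-1.1B-Chat-v1.0"  # original model linked above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a medical education assistant. Research use only."},
    {"role": "user", "content": "List three common symptoms of iron-deficiency anemia."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```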
[ "MEDQA", "MEDICAL DATA" ]
BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MELT-TinyLlama-1.1B-Chat-v1.0 - bnb 8bits - Model creator: https://huggingface.co/IBI-CAAI/ - Original model: https://huggingface.co/IBI-CAAI/MELT-TinyLlama-1.1B-Chat-v1.0/ Original model description: --- license: apache-2.0 language: - en library_name: transformers --- # Model Card MELT-TinyLlama-1.1B-Chat-v1.0 The MELT-TinyLlama-1.1B-Chat-v1.0 Large Language Model (LLM) is a generative text model pre-trained and fine-tuned using publicly available medical data. MELT-TinyLlama-1.1B-Chat-v1.0 demonstrates a 13.76% improvement over TinyLlama-1.1B-Chat-v1.0 across 3 medical benchmarks, including USMLE, Indian AIIMS, and NEET medical examination examples. ## Model Details The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain. While the model was evaluated using publicly available [USMLE](https://www.usmle.org/), Indian AIIMS, and NEET medical examination example questions, its use is intended to be more broadly applicable. ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [Center for Applied AI](https://caai.ai.uky.edu/) - **Funded by:** [Institute for Biomedical Informatics](https://www.research.uky.edu/IBI) - **Model type:** LLM - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) ## Uses MELT is intended for research purposes only. MELT models are best suited for prompts using a QA or chat format. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> MELT is intended for research purposes only and should not be used for medical advice. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> MELT was trained using publicly available collections, which likely contain biased and inaccurate information. The training and evaluation datasets have not been evaluated for content or accuracy. ## How to Get Started with the Model Use this model like you would any llama-2-7b-chat-hf model. ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering.
--> The following datasets were used for training: [Expert Med](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/Q3A969) [MedQA train](https://huggingface.co/datasets/bigbio/med_qa) [MedMCQA train](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing) [LiveQA](https://github.com/abachaa/LiveQA_MedicalTask_TREC2017) [MedicationQA](https://huggingface.co/datasets/truehealth/medicationqa) [MMLU clinical topics](https://huggingface.co/datasets/Stevross/mmlu) [Medical Flashcards](https://huggingface.co/datasets/medalpaca/medical_meadow_medical_flashcards) [Wikidoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc) [Wikidoc Patient Information](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information) [MEDIQA](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa) [MMMLU](https://huggingface.co/datasets/medalpaca/medical_meadow_mmmlu) [icliniq 10k](https://drive.google.com/file/d/1ZKbqgYqWc7DJHs3N9TQYQVPdDQmZaClA/view?usp=sharing) [HealthCare Magic 100k](https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing) [GenMedGPT-5k](https://drive.google.com/file/d/1nDTKZ3wZbZWTkFMBkxlamrzbNz0frugg/view?usp=sharing) [Mental Health Conversational](https://huggingface.co/datasets/heliosbrahma/mental_health_conversational_dataset) ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Training Hyperparameters - **Lora Rank:** 64 - **Lora Alpha:** 16 - **Lora Targets:** "o_proj","down_proj","v_proj","gate_proj","up_proj","k_proj","q_proj" - **LR:** 2e-4 - **Epoch:** 3 - **Precision:** bf16 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> MELT-TinyLlama-1.1B-Chat-v1.0 demonstrates an average 13.76% improvement over TinyLlama-1.1B-Chat-v1.0 across 3 USMLE, Indian AIIMS, and NEET medical examination benchmarks. 
### TinyLlama-1.1B-Chat-v1.0 - **medqa:** {'base': {'Average': 25.49, 'STEP-1': 24.48, 'STEP-2&3': 26.64}} - **mausmle:** {'base': {'Average': 19.71, 'STEP-1': 21.18, 'STEP-2': 20.69, 'STEP-3': 17.76}} - **medmcqa:** {'base': {'Average': 28.52, 'MEDICINE': 29.35, 'OPHTHALMOLOGY': 28.57, 'ANATOMY': 30.82, 'PATHOLOGY': 29.07, 'PHYSIOLOGY': 20.45, 'DENTAL': 30.09, 'RADIOLOGY': 14.29, 'BIOCHEMISTRY': 22.31, 'ANAESTHESIA': 26.09, 'GYNAECOLOGY': 24.84, 'PHARMACOLOGY': 32.02, 'SOCIAL': 31.11, 'PEDIATRICS': 31.82, 'ENT': 28.95, 'SURGERY': 31.45, 'MICROBIOLOGY': 26.03, 'FORENSIC': 16.28, 'PSYCHIATRY': 22.22, 'SKIN': 40.0, 'ORTHOPAEDICS': 21.43, 'UNKNOWN': 0.0}} - **average:** 24.57% ### MELT-TinyLlama-1.1B-Chat-v1.0 - **medqa:** {'base': {'Average': 29.5, 'STEP-1': 28.17, 'STEP-2&3': 31.03}} - **mausmle:** {'base': {'Average': 21.51, 'STEP-1': 27.06, 'STEP-2': 19.54, 'STEP-3': 18.69}} - **medmcqa:** {'base': {'Average': 32.84, 'MEDICINE': 27.72, 'OPHTHALMOLOGY': 38.1, 'ANATOMY': 39.73, 'PATHOLOGY': 32.56, 'PHYSIOLOGY': 35.61, 'DENTAL': 32.23, 'RADIOLOGY': 41.07, 'BIOCHEMISTRY': 33.06, 'ANAESTHESIA': 39.13, 'GYNAECOLOGY': 22.88, 'PHARMACOLOGY': 32.58, 'SOCIAL': 26.67, 'PEDIATRICS': 34.09, 'ENT': 42.11, 'SURGERY': 33.47, 'MICROBIOLOGY': 30.14, 'FORENSIC': 41.86, 'PSYCHIATRY': 55.56, 'SKIN': 60.0, 'ORTHOPAEDICS': 35.71, 'UNKNOWN': 100.0}} - **average:** 27.95% ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [MedQA test](https://huggingface.co/datasets/bigbio/med_qa) [MedMCQA test](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing) [MA USMLE](https://huggingface.co/datasets/medalpaca/medical_meadow_usmle_self_assessment) ## Disclaimer: The use of large language models, such as this one, is provided without warranties or guarantees of any kind. While every effort has been made to ensure accuracy, completeness, and reliability of the information generated, it should be noted that these models may produce responses that are inaccurate, outdated, or inappropriate for specific purposes. Users are advised to exercise discretion and judgment when relying on the information generated by these models. The outputs should not be considered as professional, legal, medical, financial, or any other form of advice. It is recommended to seek expert advice or consult appropriate sources for specific queries or critical decision-making. The creators, developers, and providers of these models disclaim any liability for damages, losses, or any consequences arising from the use, reliance upon, or interpretation of the information provided by these models. The user assumes full responsibility for their interactions and usage of the generated content. By using these language models, users agree to indemnify and hold harmless the developers, providers, and affiliates from any claims, damages, or liabilities that may arise from their use. Please be aware that these models are constantly evolving, and their capabilities, limitations, and outputs may change over time without prior notice. Your use of this language model signifies your acceptance and understanding of this disclaimer.
{}
dataset
null
562
QuantFactory/SeaLLM-7B-v2-GGUF
QuantFactory
text-generation
[ "gguf", "mistral", "multilingual", "sea", "conversational", "text-generation", "arxiv:2312.00738", "arxiv:2205.11916", "arxiv:2306.05179", "arxiv:2306.05685", "base_model:SeaLLMs/SeaLLM-7B-v2", "base_model:quantized:SeaLLMs/SeaLLM-7B-v2", "license:other", "endpoints_compatible", "region:us" ]
2024-04-18T16:32:21Z
2024-04-18T19:19:05+00:00
212
1
--- base_model: SeaLLMs/SeaLLM-7B-v2 license: other license_name: seallms license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE pipeline_tag: text-generation tags: - mistral - multilingual - sea - conversational --- # SeaLLM-7B-v2-GGUF - This is a GGUF quantized version of [SeaLLMs/SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) ## Model Description We introduce [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): with half the size, it outperforms its predecessor across diverse multilingual tasks, from world knowledge and math reasoning to instruction following. ### Highlights * [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves the **7B-SOTA** on the **Zero-shot CoT GSM8K** task with a **78.2** score and outperforms GPT-3.5 in many GSM8K-translated tasks in SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭) as well as MGSM (🇨🇳 🇹🇭). It also surpasses GPT-3.5 in MATH CoT for Thai 🇹🇭. * It scores competitively against GPT-3.5 on many zero-shot CoT commonsense benchmarks, with **82.5, 68.3, 80.9** scores on Arc-C, Winogrande, and Hellaswag. * It achieves a **7.54** score on the 🇬🇧 **MT-bench**, ranking 3rd on the leaderboard for the 7B category, and is the best-performing multilingual model there. * It scores **45.74** on the VMLU benchmark for Vietnamese 🇻🇳, and is the only open-source multilingual model that is competitive with monolingual models ([Vistral-7B](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)) of similar sizes. ### Release and DEMO - DEMO: [SeaLLMs/SeaLLM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B). - Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf). - Model weights: [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2). <blockquote style="color:red"> <p><strong style="color: red">Terms of Use and License</strong>: By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>. </blockquote> > **Disclaimer**: > We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation. > Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations. > In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos. ### What's new since SeaLLM-13B-v1 and SeaLLM-7B-v1? * SeaLLM-7B-v2 is continue-pretrained from [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) and underwent carefully designed tuning with a focus on reasoning. ## Evaluation ### Zero-shot CoT Multilingual Math Reasoning [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves a **78.2** score on GSM8K with zero-shot CoT reasoning, making it the **state of the art** in the realm of 7B models.
It also outperforms GPT-3.5 in the same GSM8K benchmark as translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **22.4** vs 18.1 scores. ![fig_sea_math_side_by_side.png](fig_sea_math_side_by_side.png) <details> <summary>See details on English and translated GSM8K and MATH with zero-shot reasoning</summary> <br> | Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1 | Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6 | Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | | | Qwen1.5-7B-chat | 56.8 | 15.3 | 40 | 2.7 | 37.7 | 9 | 36.9 | 7.7 | 21.9 | | SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4 </details> Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)). #### Zero-shot MGSM [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Zh and Th. | Model | MGSM-Zh | MGSM-Th |-----| ----- | --- | ChatGPT (reported) | 61.2 | 47.2 | Qwen-14B-chat | 59.6 | 28 | SeaLLM-7B-v2 | **64.8** | **62.4** ### Zero-shot Commonsense Reasoning We compare [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) with ChatGPT and Mistral-7B-instruct on various zero-shot commonsense benchmarks (Arc-Challenge, Winogrande and Hellaswag). We use the 2-stage technique in [(Kojima et al., 2023)](https://arxiv.org/pdf/2205.11916.pdf) to grab the answer. Note that we **DID NOT** use "Let's think step-by-step" to invoke explicit CoT. | 0-shot reasoning | Arc-Challenge | Winogrande | Hellaswag |-----| ----- | --- | -- | | ChatGPT (reported) | 84.6* | 66.8* | 72.0* | ChatGPT (reproduced)| 84.1 | 63.1 | 79.5 | Mistral-7B-Instruct | 68.1 | 56.4 | 45.6 | Qwen1.5-7B-chat | 79.3 | 59.4 | 69.3 | SeaLLM-7B-v2 | 82.5 | 68.3 | 80.9 Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)). ### Multilingual World Knowledge We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi. | Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e |-----| ----- | --- | -- | ----- | ---- | --- | --- | --- | | GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41 | Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27 | Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25 | SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52 VMLU reproduce script [here](https://github.com/DAMO-NLP-SG/SeaLLMs/blob/main/evaluation/vmlu/vmlu_run.py). Lm-eval was used to evaluate MMLU. 
0-shot VMLU scores for baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json)). ### MT-Bench On the English [MT-bench](https://arxiv.org/abs/2306.05685) metric, SeaLLM-7B-v2 achieves a **7.54** score on the MT-bench (3rd place on the leaderboard for the 7B category), outperforming many 70B models, and is arguably the only one that handles 10 SEA languages. Refer to [mt_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/mt_bench/seallm_7b_v2.jsonl) for the MT-bench predictions of SeaLLM-7B-v2, and [here](https://github.com/lm-sys/FastChat/issues/3013#issue-2118685341) to reproduce it. | Model | Access | Langs | MT-Bench | --- | --- | --- | --- | | GPT-4-turbo | closed | multi | 9.32 | GPT-4-0613 | closed | multi | 9.18 | Mixtral-8x7b (46B) | open | multi | 8.3 | Starling-LM-7B-alpha | open | mono (en) | 8.0 | OpenChat-3.5-7B | open | mono (en) | 7.81 | **SeaLLM-7B-v2** | **open** | **multi (10+)** | **7.54** | [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B-Chat) | open | multi | 6.96 | [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | open | mono (en) | 6.86 | Mistral-7B-instruct | open | mono (en) | 6.84 ### Sea-Bench Similar to MT-Bench, [Sea-bench](https://huggingface.co/datasets/SeaLLMs/Sea-bench) is a set of categorized instruction test sets to measure models' ability as an assistant that is specifically focused on 9 SEA languages, including non-Latin low-resource languages. As shown, the biggest improvements come from math reasoning, reaching GPT-3.5-level performance. ![fig_sea_bench_side_by_side.png](fig_sea_bench_side_by_side.png) Refer to [sea_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/sea_bench/seallm_7b_v2.jsonl) for the Sea-bench predictions of SeaLLM-7B-v2. ### Usage #### Instruction format ```python prompt = """<|im_start|>system You are a helpful assistant.</s><|im_start|>user Hello world</s><|im_start|>assistant Hi there, how can I help?</s>""" # NOTE: previous commit has \n between </s> and <|im_start|>, that was incorrect! # <|im_start|> is not a special token. # Transformers chat_template should be consistent with vLLM format below. # ! ENSURE 1 and only 1 bos `<s>` at the beginning of sequence print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))) # Expected output: ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>'] ``` #### Using transformers' chat_template ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto # use bfloat16 to ensure the best performance.
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2", torch_dtype=torch.bfloat16, device_map=device) tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2") messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello world"}, {"role": "assistant", "content": "Hi there, how can I help you today?"}, {"role": "user", "content": "Explain general relativity in details."} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True) print(tokenizer.convert_ids_to_tokens(encodeds[0])) # ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '▁you', '▁today', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Ex', 'plain', '▁general', '▁rel', 'ativity', '▁in', '▁details', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>'] model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` #### Using vLLM ```python from vllm import LLM, SamplingParams TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>" TURN_PREFIX = "<|im_start|>{role}\n" # There is no \n between </s> and <|im_start|>. def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None): # conversations: list of dict with key `role` and `content` (openai format) if conversations[0]['role'] != 'system' and system_prompt is not None: conversations = [{"role": "system", "content": system_prompt}] + conversations text = '' for turn_id, turn in enumerate(conversations): prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content']) text += prompt if add_assistant_prefix: prompt = TURN_PREFIX.format(role='assistant') text += prompt return text sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['</s>', '<|im_start|>']) llm = LLM("SeaLLMs/SeaLLM-7B-v2", dtype="bfloat16") message = "Explain general relativity in details." prompt = seallm_chat_convo_format([{"role": "user", "content": message}], True) gen = llm.generate(prompt, sparams) print(gen[0].outputs[0].text) ``` #### Fine-tuning SeaLLM-7B-v2 Fine-tuning should follow the chat format and accurately mask out source tokens. Here is an example. ```python conversations = [ {"role": "system", "content": "You are helful assistant."}, {"role": "user", "content": "Hello world."}, {"role": "assistant", "content": "Hi there, how can I help?"}, {"role": "user", "content": "Tell me a joke."}, {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."}, ] def seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False): """ Inputs: conversations: list of dict following openai format, eg conversations = [ {"role": "system", "content": "You are helful assistant."}, {"role": "user", "content": "Hello world."}, {"role": "assistant", "content": "Hi there, how can I help?"}, {"role": "user", "content": "Tell me a joke."}, {"role": "assistant", "content": "Why don't scientists trust atoms?
Because they make up everything."}, ] add_assistant_prefix: whether to add assistant_prefix, only for inference decoding Outputs: tokenize_output_sample, { "input_ids": ... "token_type_ids": 1 if train and 0 if masked out (not train) } During training, need to create a labels, with masked-out tokens = -100 to avoid loss computations. labels = sample['input_ids'].clone() labels[sample['token_type_ids'] == 0] = -100 """ TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>" TURN_PREFIX = "<|im_start|>{role}\n" sample = None assistant_prefix_len = None for turn_id, turn in enumerate(conversations): prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content']) turn_sample = tokenizer( prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False, return_token_type_ids=True, ) if turn['role'] == 'assistant': if assistant_prefix_len is None: assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False)) turn_sample['token_type_ids'][assistant_prefix_len:] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len) if sample is None: sample = turn_sample else: for k in turn_sample.keys(): sample[k].extend(turn_sample[k]) if add_assistant_prefix: assistant_prefix_sample = tokenizer( TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False, add_special_tokens=False, return_token_type_ids=True, ) for k in sample.keys(): sample[k].extend(assistant_prefix_sample[k]) if tokenizer.add_bos_token: sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids'] sample['attention_mask'] = [1] + sample['attention_mask'] sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids'] return sample # ! testing sample = seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations) print(tokenizer.convert_ids_to_tokens(sample['input_ids'])) print(sample['token_type_ids']) # ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁hel', 'ful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Tell', '▁me', '▁a', '▁joke', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Why', '▁don', "'", 't', '▁scientists', '▁trust', '▁atoms', '?', '▁Because', '▁they', '▁make', '▁up', '▁everything', '.', '</s>'] # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` ## Acknowledgement to Our Linguists We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety. ## Citation If you find our project useful, we hope you would kindly star our repo and cite our work as follows: Corresponding Author: [[email protected]](mailto:[email protected]) **Author list and order will change!** * `*` and `^` are equal contributions. 
``` @article{damonlpsg2023seallm, author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*, Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing}, title = {SeaLLMs - Large Language Models for Southeast Asia}, year = 2023, Eprint = {arXiv:2312.00738}, } ```
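The docstring of `seallm_7b_v2_tokenize_multi_turns` above describes turning `token_type_ids` into a loss mask with `-100` labels; the following is a minimal sketch of that step, assuming PyTorch and the `tokenizer` and `conversations` objects from the fine-tuning example (variable names are illustrative, not part of the official recipe).

```python
import torch

# Tokenize the multi-turn conversation with the helper defined above.
sample = seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations)

input_ids = torch.tensor(sample["input_ids"])
attention_mask = torch.tensor(sample["attention_mask"])
token_type_ids = torch.tensor(sample["token_type_ids"])

# Train only on assistant tokens: positions marked 0 are set to -100
# so the cross-entropy loss ignores them.
labels = input_ids.clone()
labels[token_type_ids == 0] = -100

batch = {
    "input_ids": input_ids.unsqueeze(0),
    "attention_mask": attention_mask.unsqueeze(0),
    "labels": labels.unsqueeze(0),
}
# `batch` can then be passed to model(**batch) inside a standard training loop.
```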
[ "CHIA" ]
Non_BioNLP
# SeaLLM-7B-v2-GGUF
- This is a GGUF quantized version of [SeaLLMs/SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2)

## Model Description
We introduce [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): at half the size, it delivers superior performance across diverse multilingual tasks, from world knowledge and math reasoning to instruction following.

### Highlights
* [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves the **7B-SOTA** on the **Zero-shot CoT GSM8K** task with a **78.2** score and outperforms GPT-3.5 in many GSM8K-translated tasks in SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭) as well as MGSM (🇨🇳 🇹🇭). It also surpasses GPT-3.5 in MATH CoT for Thai 🇹🇭.
* It scores competitively against GPT-3.5 on many zero-shot CoT commonsense benchmarks, with **82.5, 68.3, 80.9** scores on Arc-C, Winogrande, and Hellaswag.
* It achieves a **7.54** score on the 🇬🇧 **MT-bench**, ranking 3rd on the leaderboard for the 7B category, and is the best-performing multilingual model among them.
* It scores **45.74** on the VMLU benchmark for Vietnamese 🇻🇳 and is the only open-source multilingual model that is competitive with monolingual models ([Vistral-7B](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)) of similar size.

### Release and DEMO

- DEMO: [SeaLLMs/SeaLLM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B).
- Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf).
- Model weights: [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2).

<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
</blockquote>

> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.

### What's new since SeaLLM-13B-v1 and SeaLLM-7B-v1?

* SeaLLM-7B-v2 is continue-pretrained from [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) and underwent carefully designed tuning with a focus on reasoning.

## Evaluation

### Zero-shot CoT Multilingual Math Reasoning

[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves a **78.2** score on GSM8K with zero-shot CoT reasoning, making it the **state of the art** in the realm of 7B models. It also outperforms GPT-3.5 on the same GSM8K benchmark as translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭).
[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **22.4** vs 18.1 scores. ![fig_sea_math_side_by_side.png](fig_sea_math_side_by_side.png) <details> <summary>See details on English and translated GSM8K and MATH with zero-shot reasoning</summary> <br> | Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1 | Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6 | Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | | | Qwen1.5-7B-chat | 56.8 | 15.3 | 40 | 2.7 | 37.7 | 9 | 36.9 | 7.7 | 21.9 | | SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4 </details> Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)). #### Zero-shot MGSM [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Zh and Th. | Model | MGSM-Zh | MGSM-Th |-----| ----- | --- | ChatGPT (reported) | 61.2 | 47.2 | Qwen-14B-chat | 59.6 | 28 | SeaLLM-7B-v2 | **64.8** | **62.4** ### Zero-shot Commonsense Reasoning We compare [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) with ChatGPT and Mistral-7B-instruct on various zero-shot commonsense benchmarks (Arc-Challenge, Winogrande and Hellaswag). We use the 2-stage technique in [(Kojima et al., 2023)](https://arxiv.org/pdf/2205.11916.pdf) to grab the answer. Note that we **DID NOT** use "Let's think step-by-step" to invoke explicit CoT. | 0-shot reasoning | Arc-Challenge | Winogrande | Hellaswag |-----| ----- | --- | -- | | ChatGPT (reported) | 84.6* | 66.8* | 72.0* | ChatGPT (reproduced)| 84.1 | 63.1 | 79.5 | Mistral-7B-Instruct | 68.1 | 56.4 | 45.6 | Qwen1.5-7B-chat | 79.3 | 59.4 | 69.3 | SeaLLM-7B-v2 | 82.5 | 68.3 | 80.9 Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)). ### Multilingual World Knowledge We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi. | Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e |-----| ----- | --- | -- | ----- | ---- | --- | --- | --- | | GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41 | Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27 | Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25 | SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52 VMLU reproduce script [here](https://github.com/DAMO-NLP-SG/SeaLLMs/blob/main/evaluation/vmlu/vmlu_run.py). Lm-eval was used to evaluate MMLU. 0-shot VMLU scores for baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json)). 
### MT-Bench On the English [MT-bench](https://arxiv.org/abs/2306.05685) metric, SeaLLM-7B-v2 achieves **7.54** score on the MT-bench (3rd place on the leaderboard for 7B category), outperforms many 70B models and is arguably the only one that handles 10 SEA languages. Refer to [mt_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/mt_bench/seallm_7b_v2.jsonl) for the MT-bench predictions of SeaLLM-7B-v2, and [here](https://github.com/lm-sys/FastChat/issues/3013#issue-2118685341) to reproduce it. | Model | Access | Langs | MT-Bench | --- | --- | --- | --- | | GPT-4-turbo | closed | multi | 9.32 | GPT-4-0613 | closed | multi | 9.18 | Mixtral-8x7b (46B) | open | multi | 8.3 | Starling-LM-7B-alpha | open | mono (en) | 8.0 | OpenChat-3.5-7B | open | mono (en) | 7.81 | **SeaLLM-7B-v2** | **open** | **multi (10+)** | **7.54** | [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B-Chat) | open | multi | 6.96 | [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | open | mono (en) | 6.86 | Mistral-7B-instuct | open | mono (en) | 6.84 ### Sea-Bench Similar to MT-Bench, [Sea-bench](https://huggingface.co/datasets/SeaLLMs/Sea-bench) is a set of categorized instruction test sets to measure models' ability as an assistant that is specifically focused on 9 SEA languages, including non-Latin low-resource languages. As shown, the huge improvements come from math-reasoning, reaching GPT-3.5 level of performance. ![fig_sea_bench_side_by_side.png](fig_sea_bench_side_by_side.png) Refer to [sea_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/sea_bench/seallm_7b_v2.jsonl) for the Sea-bench predictions of SeaLLM-7B-v2. ### Usage #### Instruction format ```python prompt = """<|im_start|>system You are a helpful assistant.</s><|im_start|>user Hello world</s><|im_start|>assistant Hi there, how can I help?</s>""" # NOTE: previous commit has \n between </s> and <|im_start|>, that was incorrect! # <|im_start|> is not a special token. # Transformers chat_template should be consistent with vLLM format below. # ! ENSURE 1 and only 1 bos `<s>` at the beginning of sequence print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))) '<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>'] """ ``` #### Using transformers's chat_template ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto # use bfloat16 to ensure the best performance. 
import torch  # needed for torch.bfloat16 below
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2", torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello world"},
    {"role": "assistant", "content": "Hi there, how can I help you today?"},
    {"role": "user", "content": "Explain general relativity in details."}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.convert_ids_to_tokens(encodeds[0]))
# ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '▁you', '▁today', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Ex', 'plain', '▁general', '▁rel', 'ativity', '▁in', '▁details', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>']

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

#### Using vLLM

```python
from vllm import LLM, SamplingParams
TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>"
TURN_PREFIX = "<|im_start|>{role}\n"

# There is no \n between </s> and <|im_start|>.

def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None):
    # conversations: list of dict with key `role` and `content` (openai format)
    if conversations[0]['role'] != 'system' and system_prompt is not None:
        conversations = [{"role": "system", "content": system_prompt}] + conversations
    text = ''
    for turn_id, turn in enumerate(conversations):
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        text += prompt
    if add_assistant_prefix:
        prompt = TURN_PREFIX.format(role='assistant')
        text += prompt
    return text

sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['</s>', '<|im_start|>'])
llm = LLM("SeaLLMs/SeaLLM-7B-v2", dtype="bfloat16")

message = "Explain general relativity in details."
# The helper expects an openai-style list of turns; pass the sampling params object defined above.
prompt = seallm_chat_convo_format([{"role": "user", "content": message}], True)
gen = llm.generate(prompt, sparams)

print(gen[0].outputs[0].text)
```

#### Fine-tuning SeaLLM-7B-v2

Fine-tuning should follow the chat format and accurately mask out source tokens. Here is an example.

```python
conversations = [
    {"role": "system", "content": "You are helful assistant."},
    {"role": "user", "content": "Hello world."},
    {"role": "assistant", "content": "Hi there, how can I help?"},
    {"role": "user", "content": "Tell me a joke."},
    {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
]
def seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False):
    """
    Inputs:
        conversations: list of dict following openai format, eg
            conversations = [
                {"role": "system", "content": "You are helful assistant."},
                {"role": "user", "content": "Hello world."},
                {"role": "assistant", "content": "Hi there, how can I help?"},
                {"role": "user", "content": "Tell me a joke."},
                {"role": "assistant", "content": "Why don't scientists trust atoms?
Because they make up everything."}, ] add_assistant_prefix: whether to add assistant_prefix, only for inference decoding Outputs: tokenize_output_sample, { "input_ids": ... "token_type_ids": 1 if train and 0 if masked out (not train) } During training, need to create a labels, with masked-out tokens = -100 to avoid loss computations. labels = sample['input_ids'].clone() labels[sample['token_type_ids'] == 0] = -100 """ TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>" TURN_PREFIX = "<|im_start|>{role}\n" sample = None assistant_prefix_len = None for turn_id, turn in enumerate(conversations): prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content']) turn_sample = tokenizer( prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False, return_token_type_ids=True, ) if turn['role'] == 'assistant': if assistant_prefix_len is None: assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False)) turn_sample['token_type_ids'][assistant_prefix_len:] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len) if sample is None: sample = turn_sample else: for k in turn_sample.keys(): sample[k].extend(turn_sample[k]) if add_assistant_prefix: assistant_prefix_sample = tokenizer( TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False, add_special_tokens=False, return_token_type_ids=True, ) for k in sample.keys(): sample[k].extend(assistant_prefix_sample[k]) if tokenizer.add_bos_token: sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids'] sample['attention_mask'] = [1] + sample['attention_mask'] sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids'] return sample # ! testing sample = seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations) print(tokenizer.convert_ids_to_tokens(sample['input_ids'])) print(sample['token_type_ids']) # ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁hel', 'ful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Tell', '▁me', '▁a', '▁joke', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Why', '▁don', "'", 't', '▁scientists', '▁trust', '▁atoms', '?', '▁Because', '▁they', '▁make', '▁up', '▁everything', '.', '</s>'] # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` ## Acknowledgement to Our Linguists We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety. ## Citation If you find our project useful, we hope you would kindly star our repo and cite our work as follows: Corresponding Author: [[email protected]](mailto:[email protected]) **Author list and order will change!** * `*` and `^` are equal contributions. 
``` @article{damonlpsg2023seallm, author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*, Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing}, title = {SeaLLMs - Large Language Models for Southeast Asia}, year = 2023, Eprint = {arXiv:2312.00738}, } ```
{"base_model": "SeaLLMs/SeaLLM-7B-v2", "license": "other", "license_name": "seallms", "license_link": "https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "tags": ["mistral", "multilingual", "sea", "conversational"]}
dataset
null
563
louisbrulenaudet/lemone-router-l
louisbrulenaudet
text-classification
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "sentence-transformers", "feature-extraction", "legal", "taxation", "fiscalité", "tax", "fr", "dataset:louisbrulenaudet/code-impots", "dataset:louisbrulenaudet/code-impots-annexe-iv", "dataset:louisbrulenaudet/code-impots-annexe-iii", "dataset:louisbrulenaudet/code-impots-annexe-i", "dataset:louisbrulenaudet/code-impots-annexe-ii", "dataset:louisbrulenaudet/livre-procedures-fiscales", "dataset:louisbrulenaudet/bofip", "base_model:intfloat/multilingual-e5-base", "base_model:finetune:intfloat/multilingual-e5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-10-23T01:47:00Z
2024-10-27T22:43:07+00:00
40
0
--- base_model: intfloat/multilingual-e5-base datasets: - louisbrulenaudet/code-impots - louisbrulenaudet/code-impots-annexe-iv - louisbrulenaudet/code-impots-annexe-iii - louisbrulenaudet/code-impots-annexe-i - louisbrulenaudet/code-impots-annexe-ii - louisbrulenaudet/livre-procedures-fiscales - louisbrulenaudet/bofip language: - fr library_name: transformers license: apache-2.0 metrics: - accuracy pipeline_tag: text-classification tags: - generated_from_trainer - sentence-transformers - text-classification - feature-extraction - legal - taxation - fiscalité - tax widget: - text: Quelles sont les modalités d'adoption d'un plan d'apurement échelonné par la commission chargée du recouvrement, et quelles sont les conditions qui s'imposent aux administrations et organismes chargés du recouvrement ainsi qu'au débiteur qui s'engage à le respecter ? example_title: Contrôle et contentieux - text: Quel régime fiscal est applicable aux opérations de crédit-bail portant sur des fonds de commerce, des fonds artisanaux, ou l'un de leurs éléments incorporels non amortissables, et quelles sont les conditions dans lesquelles les sommes correspondant à la quote-part de loyer ne constituent pas un élément du bénéfice imposable du bailleur et ne sont pas déductibles pour la détermination des résultats imposables du locataire ? example_title: Bénéfices professionnels - text: La succession s'ouvre par le décès dude cujus(code civil, art. 720). C'est donc le décès qui constitue le fait générateur de l'impôt. Dès lors, le tarif du droit et les règles applicables à sa liquidation sont celles en vigueur au jour du décès (en ce sens, Cass. com 7 janvier 1997 n° de pourvoi 95-11686). Toutefois, pour les legs sous condition suspensive (BOI-ENR-DMTG-10-10-10-10), les droits sont dus lors de la réalisation de la condition, d'après le régime fiscal applicable et la valeur des biens à cette époque (code général des impôts (CGI), art 676). Par ailleurs, pour les pénalités éventuellement exigibles, la loi applicable est celle en vigueur lors de la contravention. L'administration prouve le décès, en vue de la réclamation des droits, au moyen des registres de l'état civil dont les maires sont tenus de lui remettre un relevé trimestriel (LPF, art. L. 102 A). Elle peut aussi prouver la mutation par décès au moyen des présomptions légales de l'article 1881 du CGI et de l'article 1882 du CGI. Dans ce cas le fait générateur se place à la date à partir de laquelle la prise de possession est établie. example_title: Patrimoine et enregistrement - text: Quelles sont les obligations déclaratives que les associés personnes physiques doivent respecter pour bénéficier de la réduction d'impôt accordée au titre des dépenses de restauration immobilière effectuées dans les sites patrimoniaux remarquables et les quartiers relevant de la politique de la ville, et quelles sont les pièces justificatives qui doivent être jointes à leur déclaration des revenus ? example_title: Revenus particuliers model-index: - name: lemone-router results: [] --- <img src="assets/thumbnail.webp"> # Lemone-Router: A Series of Fine-Tuned Classification Models for French Taxation Lemone-router is a series of classification models designed to produce an optimal multi-agent system for different branches of tax law. 
Trained on a base of 49k lines comprising a set of synthetic questions generated by GPT-4 Turbo and Llama 3.1 70B, which have been further refined through evol-instruction tuning and manual curation and authority documents, these models are based on an 8-category decomposition of the classification scheme derived from the Bulletin officiel des finances publiques - impôts : ```python label2id = { "Bénéfices professionnels": 0, "Contrôle et contentieux": 1, "Dispositifs transversaux": 2, "Fiscalité des entreprises": 3, "Patrimoine et enregistrement": 4, "Revenus particuliers": 5, "Revenus patrimoniaux": 6, "Taxes sur la consommation": 7 } id2label = { 0: "Bénéfices professionnels", 1: "Contrôle et contentieux", 2: "Dispositifs transversaux", 3: "Fiscalité des entreprises", 4: "Patrimoine et enregistrement", 5: "Revenus particuliers", 6: "Revenus patrimoniaux", 7: "Taxes sur la consommation" } ``` This model is a fine-tuned version of [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large). It achieves the following results on the evaluation set: - Loss: 0.4734 - Accuracy: 0.9191 ### Usage ```python # Load model directly from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("louisbrulenaudet/lemone-router-l") model = AutoModelForSequenceClassification.from_pretrained("louisbrulenaudet/lemone-router-l") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.6763799752474963e-05 - train_batch_size: 4 - eval_batch_size: 64 - seed: 25 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6402 | 1.0 | 11233 | 0.6569 | 0.8630 | | 0.5031 | 2.0 | 22466 | 0.5058 | 0.9025 | | 0.2196 | 3.0 | 33699 | 0.4734 | 0.9191 | ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA H100 NVL - **CPU Model**: AMD EPYC 9V84 96-Core Processor ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cu121 - Datasets 2.21.0 - Tokenizers 0.20.1 ## Citation If you use this code in your research, please use the following BibTeX entry. ```BibTeX @misc{louisbrulenaudet2024, author = {Louis Brulé Naudet}, title = {Lemone-Router: A Series of Fine-Tuned Classification Models for French Taxation}, year = {2024} howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/lemone-router-l}}, } ``` ## Feedback If you have any feedback, please reach out at [[email protected]](mailto:[email protected]).
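The usage snippet above stops after loading the tokenizer and model; the following is a minimal end-to-end classification sketch built on the `id2label` mapping defined earlier in this card. The example question is written in French to match the model's domain and is illustrative only, not taken from the training data.

```python
import torch

question = (
    "Quelles sont les conditions d'application du régime des plus-values "
    "professionnelles à long terme ?"
)

# Tokenize the question and run a single forward pass without gradients.
inputs = tokenizer(question, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its tax-law category.
predicted_id = logits.argmax(dim=-1).item()
print(id2label[predicted_id])
```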
[ "CAS" ]
Non_BioNLP
<img src="assets/thumbnail.webp"> # Lemone-Router: A Series of Fine-Tuned Classification Models for French Taxation Lemone-router is a series of classification models designed to produce an optimal multi-agent system for different branches of tax law. Trained on a base of 49k lines comprising a set of synthetic questions generated by GPT-4 Turbo and Llama 3.1 70B, which have been further refined through evol-instruction tuning and manual curation and authority documents, these models are based on an 8-category decomposition of the classification scheme derived from the Bulletin officiel des finances publiques - impôts : ```python label2id = { "Bénéfices professionnels": 0, "Contrôle et contentieux": 1, "Dispositifs transversaux": 2, "Fiscalité des entreprises": 3, "Patrimoine et enregistrement": 4, "Revenus particuliers": 5, "Revenus patrimoniaux": 6, "Taxes sur la consommation": 7 } id2label = { 0: "Bénéfices professionnels", 1: "Contrôle et contentieux", 2: "Dispositifs transversaux", 3: "Fiscalité des entreprises", 4: "Patrimoine et enregistrement", 5: "Revenus particuliers", 6: "Revenus patrimoniaux", 7: "Taxes sur la consommation" } ``` This model is a fine-tuned version of [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large). It achieves the following results on the evaluation set: - Loss: 0.4734 - Accuracy: 0.9191 ### Usage ```python # Load model directly from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("louisbrulenaudet/lemone-router-l") model = AutoModelForSequenceClassification.from_pretrained("louisbrulenaudet/lemone-router-l") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.6763799752474963e-05 - train_batch_size: 4 - eval_batch_size: 64 - seed: 25 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6402 | 1.0 | 11233 | 0.6569 | 0.8630 | | 0.5031 | 2.0 | 22466 | 0.5058 | 0.9025 | | 0.2196 | 3.0 | 33699 | 0.4734 | 0.9191 | ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA H100 NVL - **CPU Model**: AMD EPYC 9V84 96-Core Processor ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cu121 - Datasets 2.21.0 - Tokenizers 0.20.1 ## Citation If you use this code in your research, please use the following BibTeX entry. ```BibTeX @misc{louisbrulenaudet2024, author = {Louis Brulé Naudet}, title = {Lemone-Router: A Series of Fine-Tuned Classification Models for French Taxation}, year = {2024} howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/lemone-router-l}}, } ``` ## Feedback If you have any feedback, please reach out at [[email protected]](mailto:[email protected]).
{"base_model": "intfloat/multilingual-e5-base", "datasets": ["louisbrulenaudet/code-impots", "louisbrulenaudet/code-impots-annexe-iv", "louisbrulenaudet/code-impots-annexe-iii", "louisbrulenaudet/code-impots-annexe-i", "louisbrulenaudet/code-impots-annexe-ii", "louisbrulenaudet/livre-procedures-fiscales", "louisbrulenaudet/bofip"], "language": ["fr"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["generated_from_trainer", "sentence-transformers", "text-classification", "feature-extraction", "legal", "taxation", "fiscalité", "tax"], "widget": [{"text": "Quelles sont les modalités d'adoption d'un plan d'apurement échelonné par la commission chargée du recouvrement, et quelles sont les conditions qui s'imposent aux administrations et organismes chargés du recouvrement ainsi qu'au débiteur qui s'engage à le respecter ?", "example_title": "Contrôle et contentieux"}, {"text": "Quel régime fiscal est applicable aux opérations de crédit-bail portant sur des fonds de commerce, des fonds artisanaux, ou l'un de leurs éléments incorporels non amortissables, et quelles sont les conditions dans lesquelles les sommes correspondant à la quote-part de loyer ne constituent pas un élément du bénéfice imposable du bailleur et ne sont pas déductibles pour la détermination des résultats imposables du locataire ?", "example_title": "Bénéfices professionnels"}, {"text": "La succession s'ouvre par le décès dude cujus(code civil, art. 720). C'est donc le décès qui constitue le fait générateur de l'impôt. Dès lors, le tarif du droit et les règles applicables à sa liquidation sont celles en vigueur au jour du décès (en ce sens, Cass. com 7 janvier 1997 n° de pourvoi 95-11686). Toutefois, pour les legs sous condition suspensive (BOI-ENR-DMTG-10-10-10-10), les droits sont dus lors de la réalisation de la condition, d'après le régime fiscal applicable et la valeur des biens à cette époque (code général des impôts (CGI), art 676). Par ailleurs, pour les pénalités éventuellement exigibles, la loi applicable est celle en vigueur lors de la contravention. L'administration prouve le décès, en vue de la réclamation des droits, au moyen des registres de l'état civil dont les maires sont tenus de lui remettre un relevé trimestriel (LPF, art. L. 102 A). Elle peut aussi prouver la mutation par décès au moyen des présomptions légales de l'article 1881 du CGI et de l'article 1882 du CGI. Dans ce cas le fait générateur se place à la date à partir de laquelle la prise de possession est établie.", "example_title": "Patrimoine et enregistrement"}, {"text": "Quelles sont les obligations déclaratives que les associés personnes physiques doivent respecter pour bénéficier de la réduction d'impôt accordée au titre des dépenses de restauration immobilière effectuées dans les sites patrimoniaux remarquables et les quartiers relevant de la politique de la ville, et quelles sont les pièces justificatives qui doivent être jointes à leur déclaration des revenus ?", "example_title": "Revenus particuliers"}], "model-index": [{"name": "lemone-router", "results": []}]}
dataset
null
564
frizai/Pulse-v1
frizai
null
[ "multi-modal", "all-in-one", "chatbot", "gpt-4", "gpt-3.5 turbo", "dall-e", "whisper", "meta-llama", "image", "3d", "audio", "en", "doi:10.57967/hf/2222", "license:apache-2.0", "region:us" ]
2024-03-02T09:03:33Z
2024-03-02T09:10:05+00:00
0
1
---
language:
- en
license: apache-2.0
tags:
- multi-modal
- all-in-one
- chatbot
- gpt-4
- gpt-3.5 turbo
- dall-e
- whisper
- meta-llama
- image
- 3d
- audio
---

# Pulse AI

PulseAI, an innovative product by FrizAI, stands at the forefront of auto-generative AI technology. Leveraging the power of quantum computing and advanced machine learning techniques, PulseAI specializes in creating diverse forms of digital content. From generating intricate text compositions to developing sophisticated web applications, PulseAI taps into the immense potential of image-based prompts to revolutionize content creation.

## Overview

Pulse-AI is a dynamic Flask-based web application that integrates OpenAI's cutting-edge models. Aimed at delivering a seamless user experience, Pulse-AI provides an intuitive interface for both novices and experts to explore the capabilities of AI in image and text generation, as well as code development.

## Core Features

- **Image Generation**: Utilize OpenAI's state-of-the-art Image API to transform ideas into vivid visual representations.
- **Text Generation**: Harness the power of advanced language models for creating compelling and contextually relevant textual content.
- **Code Generation**: Leverage AI to generate efficient and effective code snippets, enhancing productivity in software development.

# Installation and Setup

Embark on your journey with Pulse-AI by following these setup instructions:

1. **Clone the Repository**
   ```bash
   git clone https://github.com/Will-Langhart/Pulse-AI.git
   cd Pulse-AI
   ```

2. **Initialize the Virtual Environment**
   ```bash
   python3 -m venv venv
   source venv/bin/activate
   ```

3. **Dependency Installation**
   ```bash
   pip install -r requirements.txt
   ```

4. **Configure Environment Variables**
   Create a `.env` file at the project root with your OpenAI API key:
   ```makefile
   OPENAI_API_KEY=<your_api_key_here>
   ```

5. **Launch the Application**
   ```bash
   flask run
   ```

## Virtual Environment

To rebuild the virtual environment from scratch:

```bash
deactivate  # If you're currently in the virtual environment
rm -rf venv
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

# Package Downloads

Dotenv
```bash
pip install python-dotenv
```

OpenAI
```bash
pip install openai
```

Flask-SQLAlchemy
```bash
pip install flask-sqlalchemy
```

Verify the installation
```bash
pip freeze
```

# GitHub Repository Changes and Updates

1. Check the repository status
   ```bash
   git status
   ```
2. Fetch the latest changes
   ```bash
   git fetch origin
   ```
3. Merge changes into the local branch
   ```bash
   git merge origin/main
   ```
4. Push the local HEAD to main
   ```bash
   git push origin HEAD:main
   ```

## Usage Instructions

Access the world of AI-powered content creation by navigating to `http://127.0.0.1:5000`. The platform offers:

- **Image Generation**: Input your creative prompts and watch as AI brings them to life.
- **Textual Content Creation**: Explore AI's ability to craft narratives, articles, and more.
- **Code Synthesis**: Generate code snippets to accelerate your development process.

## Contribution Guidelines

Join the Pulse-AI community and contribute to the evolution of AI-driven content creation:

1. Fork the repository and create a new feature branch.
2. Make your contributions and commit them with clear, descriptive messages.
3. Push your changes and initiate a pull request for review.

## License

Pulse-AI is under the MIT License. For detailed information, refer to the `LICENSE` file.
## Contact and Further Information - **Contact**: [Your Name](mailto:[email protected]) - **Twitter**: [@YourTwitter](https://twitter.com/YourTwitter) - **Project Link**: [PulseAI on GitHub](https://github.com/Will-Langhart/Pulse-AI) --- PulseAI is a testament to FrizAI's commitment to advancing AI technology, making it accessible and transformative for various industries and creative endeavors.
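As a companion to the setup steps above, here is a minimal sketch of how a Flask route might read `OPENAI_API_KEY` from the `.env` file and call the OpenAI client. The route name, model choice, and function names are illustrative assumptions, not the actual Pulse-AI code, and the sketch assumes the `openai>=1.0` client.

```python
import os

from dotenv import load_dotenv
from flask import Flask, jsonify, request
from openai import OpenAI

load_dotenv()  # reads OPENAI_API_KEY from the .env file created during setup

app = Flask(__name__)
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

@app.route("/generate-text", methods=["POST"])
def generate_text():
    # Accept a JSON body like {"prompt": "..."} and return the model's reply.
    prompt = request.json.get("prompt", "")
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"text": completion.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```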
[ "CRAFT" ]
Non_BioNLP
# Pulse AI PulseAI, an innovative product by FrizAI, stands at the forefront of auto-generative AI technology. Leveraging the power of quantum computing and advanced machine learning techniques, PulseAI specializes in creating diverse forms of digital content. From generating intricate text compositions to developing sophisticated web applications, PulseAI taps into the immense potential of image-based prompts to revolutionize content creation. ## Overview Pulse-AI is a dynamic Flask-based web application that integrates OpenAI's cutting-edge models. Aimed at delivering a seamless user experience, Pulse-AI provides an intuitive interface for both novices and experts to explore the capabilities of AI in image and text generation, as well as code development. ## Core Features - **Image Generation**: Utilize OpenAI's state-of-the-art Image API to transform ideas into vivid visual representations. - **Text Generation**: Harness the power of advanced language models for creating compelling and contextually relevant textual content. - **Code Generation**: Leverage AI to generate efficient and effective code snippets, enhancing productivity in software development. # Installation and Setup Embark on your journey with Pulse-AI by following these setup instructions: 1. **Clone the Repository** ```bash git clone https://github.com/Will-Langhart/Pulse-AI.git cd Pulse-AI ``` 2. **Initialize the Virtual Environment** ```bash python3 -m venv venv source venv/bin/activate ``` 3. **Dependency Installation** ```bash pip install -r requirements.txt ``` 4. **Configure Environment Variables** Create a `.env` file at the project root with your OpenAI API key: ```makefile OPENAI_API_KEY=<your_api_key_here> ``` 5. **Launch the Application** ```bash flask run ``` ## Virtual Environment ```bash deactivate # If you're currently in the virtual environment rm -rf venv python3 -m venv venv source venv/bin/activate pip install -r requirements.txt ``` # Package Downolads Dotnev ```bash pip install python-dotenv ``` OpenAI ```bash pip install openai ``` Flask-SQLAlchemy ```bash pip install flask-sqlalchemy ``` Verify Installation ```bash pip freeze ``` # GitHub Repository Changes and Updates 1. Fetch GitHub Repository Status ```bash git status ``` 2. Fetch the latest changes ```bash git fetch origin ``` 3. Merge changes into local branch ```bash git merge origin/main ``` 4. Merge head force ```bash git push origin HEAD:main ``` ## Usage Instructions Access the world of AI-powered content creation by navigating to `http://127.0.0.1:5000`. The platform offers: - **Image Generation**: Input your creative prompts and watch as AI brings them to life. - **Textual Content Creation**: Explore AI's ability to craft narratives, articles, and more. - **Code Synthesis**: Generate code snippets to accelerate your development process. ## Contribution Guidelines Join the Pulse-AI community and contribute to the evolution of AI-driven content creation: 1. Fork the repository and create a new feature branch. 2. Make your contributions and commit them with clear, descriptive messages. 3. Push your changes and initiate a pull request for review. ## License Pulse-AI is under the MIT License. For detailed information, refer to the `LICENSE` file. 
## Contact and Further Information - **Contact**: [Your Name](mailto:[email protected]) - **Twitter**: [@YourTwitter](https://twitter.com/YourTwitter) - **Project Link**: [PulseAI on GitHub](https://github.com/Will-Langhart/Pulse-AI) --- PulseAI is a testament to FrizAI's commitment to advancing AI technology, making it accessible and transformative for various industries and creative endeavors.
{"language": ["en"], "license": "apache-2.0", "tags": ["multi-modal", "all-in-one", "chatbot", "gpt-4", "gpt-3.5 turbo", "dall-e", "whisper", "meta-llama", "image", "3d", "audio"]}
dataset
null
565
research-backup/mbart-large-cc25-frquad-qa
research-backup
text2text-generation
[ "transformers", "pytorch", "mbart", "text2text-generation", "question answering", "fr", "dataset:lmqg/qg_frquad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-03-25T19:28:37Z
2023-04-30T09:36:55+00:00
12
0
--- datasets: - lmqg/qg_frquad language: fr license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore pipeline_tag: text2text-generation tags: - question answering widget: - text: 'question: En quelle année a-t-on trouvé trace d''un haut fourneau similaire?, context: Cette technologie ne disparaît qu''au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L''expérience n''est un échec technique que dans les régions où le savoir-faire n''existe pas, ou a disparu.' example_title: Question Answering Example 1 - text: 'question: Comment appelle-t-on la Guerre de 14-18 ?, context: Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l''Amirauté après la catastrophe des Dardanelles, lors de la Grande Guerre de 14-18, ou son rejet par l''électorat en juillet 1945. On sait également que dans ces deux cas, la guérison, certes lente et douloureuse et jamais complète ni définitive, se fera grâce à la peinture. D''un autre côté, étant donnés les symptômes de ce mal que Churchill éprouvait de plus en plus, il ne pouvait rien moins qu''être purement associé à de telles causes extrinsèques, ce qui correspond au profil classique de la dépression majeure unipolaire ou bipolaire.' example_title: Question Answering Example 2 model-index: - name: lmqg/mbart-large-cc25-frquad-qa results: - task: type: text2text-generation name: Text2text Generation dataset: name: lmqg/qg_frquad type: default args: default metrics: - type: bleu4_question_answering value: 26.33 name: BLEU4 (Question Answering) - type: rouge_l_question_answering value: 38.14 name: ROUGE-L (Question Answering) - type: meteor_question_answering value: 31.8 name: METEOR (Question Answering) - type: bertscore_question_answering value: 92.2 name: BERTScore (Question Answering) - type: moverscore_question_answering value: 77.16 name: MoverScore (Question Answering) - type: answer_f1_score__question_answering value: 60.48 name: AnswerF1Score (Question Answering) - type: answer_exact_match_question_answering value: 39.34 name: AnswerExactMatch (Question Answering) --- # Model Card of `lmqg/mbart-large-cc25-frquad-qa` This model is fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for question answering task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) - **Language:** fr - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="fr", model="lmqg/mbart-large-cc25-frquad-qa") # model prediction answers = model.answer_q(list_question="En quelle année a-t-on trouvé trace d'un haut fourneau similaire?", list_context=" Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-frquad-qa") output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.") ``` ## Evaluation - ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 39.34 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | AnswerF1Score | 60.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | BERTScore | 92.2 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_1 | 37.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_2 | 32.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_3 | 29.23 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_4 | 26.33 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | METEOR | 31.8 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | MoverScore | 77.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | ROUGE_L | 38.14 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_frquad - dataset_name: default - input_types: ['paragraph_question'] - output_types: ['answer'] - prefix_types: None - model: facebook/mbart-large-cc25 - max_length: 512 - max_length_output: 32 - epoch: 15 - batch: 32 - 
lr: 0.0002 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qa/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
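Both usage snippets above format the input as `question: ..., context: ...`; the same `transformers` pipeline also accepts a list of such strings, which is convenient for batch evaluation. The sketch below reuses the card's own context, and the second question is illustrative, not taken from the dataset.

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-frquad-qa")

context = ("Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 "
           "un haut fourneau similaire dans le Bulacan, aux Philippines.")
questions = [
    "En quelle année a-t-on trouvé trace d'un haut fourneau similaire?",
    "Dans quel pays se trouve le Bulacan?",
]

# Build one formatted input per question and decode the generated answers.
inputs = [f"question: {q}, context: {context}" for q in questions]
for answer in pipe(inputs, max_length=32):
    print(answer["generated_text"])
```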
[ "CAS" ]
Non_BioNLP
# Model Card of `lmqg/mbart-large-cc25-frquad-qa` This model is fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for question answering task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) - **Language:** fr - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="fr", model="lmqg/mbart-large-cc25-frquad-qa") # model prediction answers = model.answer_q(list_question="En quelle année a-t-on trouvé trace d'un haut fourneau similaire?", list_context=" Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-frquad-qa") output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. 
L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.") ``` ## Evaluation - ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 39.34 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | AnswerF1Score | 60.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | BERTScore | 92.2 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_1 | 37.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_2 | 32.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_3 | 29.23 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_4 | 26.33 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | METEOR | 31.8 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | MoverScore | 77.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | ROUGE_L | 38.14 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_frquad - dataset_name: default - input_types: ['paragraph_question'] - output_types: ['answer'] - prefix_types: None - model: facebook/mbart-large-cc25 - max_length: 512 - max_length_output: 32 - epoch: 15 - batch: 32 - lr: 0.0002 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qa/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
{"datasets": ["lmqg/qg_frquad"], "language": "fr", "license": "cc-by-4.0", "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "tags": ["question answering"], "widget": [{"text": "question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.", "example_title": "Question Answering Example 1"}, {"text": "question: Comment appelle-t-on la Guerre de 14-18 ?, context: Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l'Amirauté après la catastrophe des Dardanelles, lors de la Grande Guerre de 14-18, ou son rejet par l'électorat en juillet 1945. On sait également que dans ces deux cas, la guérison, certes lente et douloureuse et jamais complète ni définitive, se fera grâce à la peinture. D'un autre côté, étant donnés les symptômes de ce mal que Churchill éprouvait de plus en plus, il ne pouvait rien moins qu'être purement associé à de telles causes extrinsèques, ce qui correspond au profil classique de la dépression majeure unipolaire ou bipolaire.", "example_title": "Question Answering Example 2"}], "model-index": [{"name": "lmqg/mbart-large-cc25-frquad-qa", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_frquad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_answering", "value": 26.33, "name": "BLEU4 (Question Answering)"}, {"type": "rouge_l_question_answering", "value": 38.14, "name": "ROUGE-L (Question Answering)"}, {"type": "meteor_question_answering", "value": 31.8, "name": "METEOR (Question Answering)"}, {"type": "bertscore_question_answering", "value": 92.2, "name": "BERTScore (Question Answering)"}, {"type": "moverscore_question_answering", "value": 77.16, "name": "MoverScore (Question Answering)"}, {"type": "answer_f1_score__question_answering", "value": 60.48, "name": "AnswerF1Score (Question Answering)"}, {"type": "answer_exact_match_question_answering", "value": 39.34, "name": "AnswerExactMatch (Question Answering)"}]}]}]}
dataset
null
566
usvsnsp/pythia-2.8b-sft-hh
usvsnsp
null
[ "region:us" ]
2023-08-24T08:57:05Z
2023-08-28T17:44:18+00:00
0
0
--- {} --- Wandb Run: https://wandb.ai/eleutherai/pythia-rlhf/runs/kj29wswk Eval Results: | Task |Version|Filter| Metric |Value | |Stderr| |--------------|-------|------|----------|-----:|---|-----:| |arc_challenge |Yaml |none |acc |0.2995|± |0.0134| | | |none |acc_norm |0.3251|± |0.0137| |arc_easy |Yaml |none |acc |0.6486|± |0.0098| | | |none |acc_norm |0.5673|± |0.0102| |lambada_openai|Yaml |none |perplexity|4.7801|± |0.1197| | | |none |acc |0.6412|± |0.0067| |logiqa |Yaml |none |acc |0.2120|± |0.0160| | | |none |acc_norm |0.2873|± |0.0177| |piqa |Yaml |none |acc |0.7524|± |0.0101| | | |none |acc_norm |0.7530|± |0.0101| |sciq |Yaml |none |acc |0.8820|± |0.0102| | | |none |acc_norm |0.8160|± |0.0123| |winogrande |Yaml |none |acc |0.6077|± |0.0137| |wsc |Yaml |none |acc |0.3654|± |0.0474|
[ "SCIQ" ]
Non_BioNLP
Wandb Run: https://wandb.ai/eleutherai/pythia-rlhf/runs/kj29wswk Eval Results: | Task |Version|Filter| Metric |Value | |Stderr| |--------------|-------|------|----------|-----:|---|-----:| |arc_challenge |Yaml |none |acc |0.2995|± |0.0134| | | |none |acc_norm |0.3251|± |0.0137| |arc_easy |Yaml |none |acc |0.6486|± |0.0098| | | |none |acc_norm |0.5673|± |0.0102| |lambada_openai|Yaml |none |perplexity|4.7801|± |0.1197| | | |none |acc |0.6412|± |0.0067| |logiqa |Yaml |none |acc |0.2120|± |0.0160| | | |none |acc_norm |0.2873|± |0.0177| |piqa |Yaml |none |acc |0.7524|± |0.0101| | | |none |acc_norm |0.7530|± |0.0101| |sciq |Yaml |none |acc |0.8820|± |0.0102| | | |none |acc_norm |0.8160|± |0.0123| |winogrande |Yaml |none |acc |0.6077|± |0.0137| |wsc |Yaml |none |acc |0.3654|± |0.0474|
{}
dataset
null
567
adipanda/luffy-simpletuner-lora-8
adipanda
text-to-image
[ "diffusers", "flux", "flux-diffusers", "text-to-image", "simpletuner", "safe-for-work", "lora", "template:sd-lora", "lycoris", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
2024-10-04T01:53:00Z
2024-10-07T16:07:38+00:00
38
0
--- base_model: black-forest-labs/FLUX.1-dev license: other tags: - flux - flux-diffusers - text-to-image - diffusers - simpletuner - safe-for-work - lora - template:sd-lora - lycoris inference: true widget: - text: unconditional (blank prompt) parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/image_0_0.png - text: A scene from One Piece. Monkey D. Luffy holding a sign that says 'I LOVE PROMPTS!', he is standing full body on a beach at sunset. He is wearing a red vest, yellow sash, and a straw hat. The setting sun casts a dynamic shadow on his face. parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/image_1_0.png - text: A scene from One Piece. Monkey D. Luffy jumping out of a propeller airplane, sky diving. He looks excited and his hair is blowing in the wind. The sky is clear and blue, there are birds pictured in the distance. parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/image_2_0.png - text: 'A scene from One Piece. Monkey D. Luffy spinning a basketball on his finger on a basketball court. He is wearing a lakers jersey with the #12 on it. The basketball hoop and crowd are in the background cheering him. He is smiling.' parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/image_3_0.png - text: A scene from One Piece. Monkey D. Luffy is wearing a suit in an office shaking the hand of a business woman. The woman has purple hair and is wearing professional attire. There is a Google logo in the background. It is during daytime, and the overall sentiment is one of accomplishment. parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/image_4_0.png - text: A scene from One Piece. Monkey D. Luffy is fighting a large brown grizzly bear, deep in a forest. The bear is tall and standing on two legs, roaring. The bear is also wearing a crown because it is the king of all bears. Around them are tall trees and other animals watching. parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/image_5_0.png --- # luffy-simpletuner-lora-8 This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev). No validation prompt was used during training. None ## Validation settings - CFG: `3.5` - CFG Rescale: `0.0` - Steps: `20` - Sampler: `None` - Seed: `42` - Resolution: `1024x1024` Note: The validation settings are not necessarily the same as the [training settings](#training-settings). You can find some example images in the following gallery: <Gallery /> The text encoder **was not** trained. You may reuse the base model text encoder for inference. 
## Training settings - Training epochs: 31 - Training steps: 19500 - Learning rate: 5e-05 - Effective batch size: 8 - Micro-batch size: 8 - Gradient accumulation steps: 1 - Number of GPUs: 1 - Prediction type: flow-matching - Rescaled betas zero SNR: False - Optimizer: adamw_bf16 - Precision: Pure BF16 - Quantised: Yes: int8-quanto - Xformers: Not used - LyCORIS Config: ```json { "algo": "lokr", "multiplier": 1.0, "linear_dim": 10000, "linear_alpha": 1, "factor": 12, "apply_preset": { "target_module": [ "Attention", "FeedForward" ], "module_algo_map": { "Attention": { "factor": 12 }, "FeedForward": { "factor": 6 } } } } ``` ## Datasets ### luffy-1024-crop - Repeats: 1 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 1.048576 megapixels - Cropped: True - Crop style: random - Crop aspect: square ### luffy-1024 - Repeats: 1 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 1.048576 megapixels - Cropped: False - Crop style: None - Crop aspect: None ### luffy-768-crop - Repeats: 2 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 0.589824 megapixels - Cropped: True - Crop style: random - Crop aspect: square ### luffy-768 - Repeats: 2 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 0.589824 megapixels - Cropped: False - Crop style: None - Crop aspect: None ### luffy-512-crop - Repeats: 2 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 0.262144 megapixels - Cropped: True - Crop style: random - Crop aspect: square ### luffy-512 - Repeats: 2 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 0.262144 megapixels - Cropped: False - Crop style: None - Crop aspect: None ## Inference ```python import torch from diffusers import DiffusionPipeline from lycoris import create_lycoris_from_weights model_id = 'black-forest-labs/FLUX.1-dev' adapter_id = 'pytorch_lora_weights.safetensors' # you will have to download this manually lora_scale = 1.0 wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer) wrapper.merge_to() prompt = "An astronaut is riding a horse through the jungles of Thailand." pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') image = pipeline( prompt=prompt, num_inference_steps=20, generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826), width=1024, height=1024, guidance_scale=3.5, ).images[0] image.save("output.png", format="PNG") ```
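Note that the inference example above references `pipeline` before it is ever constructed. Below is a hedged sketch that fills in the missing step; the `DiffusionPipeline.from_pretrained` call, the bfloat16 dtype and the example prompt are assumptions rather than part of the original card.

```python
# Sketch only: same flow as the card's snippet, plus the missing pipeline construction.
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'pytorch_lora_weights.safetensors'  # downloaded from this repository
lora_scale = 1.0

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Missing in the original snippet: actually load the base pipeline.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.to(device)

# Attach and merge the LyCORIS adapter, as in the original card.
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer)
wrapper.merge_to()

image = pipeline(
    prompt="A scene from One Piece. Monkey D. Luffy standing on a beach at sunset.",
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```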
[ "BEAR" ]
Non_BioNLP
# luffy-simpletuner-lora-8 This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev). No validation prompt was used during training. None ## Validation settings - CFG: `3.5` - CFG Rescale: `0.0` - Steps: `20` - Sampler: `None` - Seed: `42` - Resolution: `1024x1024` Note: The validation settings are not necessarily the same as the [training settings](#training-settings). You can find some example images in the following gallery: <Gallery /> The text encoder **was not** trained. You may reuse the base model text encoder for inference. ## Training settings - Training epochs: 31 - Training steps: 19500 - Learning rate: 5e-05 - Effective batch size: 8 - Micro-batch size: 8 - Gradient accumulation steps: 1 - Number of GPUs: 1 - Prediction type: flow-matching - Rescaled betas zero SNR: False - Optimizer: adamw_bf16 - Precision: Pure BF16 - Quantised: Yes: int8-quanto - Xformers: Not used - LyCORIS Config: ```json { "algo": "lokr", "multiplier": 1.0, "linear_dim": 10000, "linear_alpha": 1, "factor": 12, "apply_preset": { "target_module": [ "Attention", "FeedForward" ], "module_algo_map": { "Attention": { "factor": 12 }, "FeedForward": { "factor": 6 } } } } ``` ## Datasets ### luffy-1024-crop - Repeats: 1 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 1.048576 megapixels - Cropped: True - Crop style: random - Crop aspect: square ### luffy-1024 - Repeats: 1 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 1.048576 megapixels - Cropped: False - Crop style: None - Crop aspect: None ### luffy-768-crop - Repeats: 2 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 0.589824 megapixels - Cropped: True - Crop style: random - Crop aspect: square ### luffy-768 - Repeats: 2 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 0.589824 megapixels - Cropped: False - Crop style: None - Crop aspect: None ### luffy-512-crop - Repeats: 2 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 0.262144 megapixels - Cropped: True - Crop style: random - Crop aspect: square ### luffy-512 - Repeats: 2 - Total number of images: 306 - Total number of aspect buckets: 1 - Resolution: 0.262144 megapixels - Cropped: False - Crop style: None - Crop aspect: None ## Inference ```python import torch from diffusers import DiffusionPipeline from lycoris import create_lycoris_from_weights model_id = 'black-forest-labs/FLUX.1-dev' adapter_id = 'pytorch_lora_weights.safetensors' # you will have to download this manually lora_scale = 1.0 wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer) wrapper.merge_to() prompt = "An astronaut is riding a horse through the jungles of Thailand." pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') image = pipeline( prompt=prompt, num_inference_steps=20, generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826), width=1024, height=1024, guidance_scale=3.5, ).images[0] image.save("output.png", format="PNG") ```
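As noted in the snippet, `pytorch_lora_weights.safetensors` has to be downloaded manually. One way to fetch it programmatically is sketched below (assuming the file is stored at the root of this repository):

```python
# Sketch: download the LyCORIS adapter weights from the Hub before running inference.
from huggingface_hub import hf_hub_download

adapter_path = hf_hub_download(
    repo_id="adipanda/luffy-simpletuner-lora-8",
    filename="pytorch_lora_weights.safetensors",  # assumed to sit at the repo root
)
print(adapter_path)  # local cache path to pass to create_lycoris_from_weights
```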
{"base_model": "black-forest-labs/FLUX.1-dev", "license": "other", "tags": ["flux", "flux-diffusers", "text-to-image", "diffusers", "simpletuner", "safe-for-work", "lora", "template:sd-lora", "lycoris"], "inference": true, "widget": [{"text": "unconditional (blank prompt)", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_0_0.png"}}, {"text": "A scene from One Piece. Monkey D. Luffy holding a sign that says 'I LOVE PROMPTS!', he is standing full body on a beach at sunset. He is wearing a red vest, yellow sash, and a straw hat. The setting sun casts a dynamic shadow on his face.", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_1_0.png"}}, {"text": "A scene from One Piece. Monkey D. Luffy jumping out of a propeller airplane, sky diving. He looks excited and his hair is blowing in the wind. The sky is clear and blue, there are birds pictured in the distance.", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_2_0.png"}}, {"text": "A scene from One Piece. Monkey D. Luffy spinning a basketball on his finger on a basketball court. He is wearing a lakers jersey with the #12 on it. The basketball hoop and crowd are in the background cheering him. He is smiling.", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_3_0.png"}}, {"text": "A scene from One Piece. Monkey D. Luffy is wearing a suit in an office shaking the hand of a business woman. The woman has purple hair and is wearing professional attire. There is a Google logo in the background. It is during daytime, and the overall sentiment is one of accomplishment.", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_4_0.png"}}, {"text": "A scene from One Piece. Monkey D. Luffy is fighting a large brown grizzly bear, deep in a forest. The bear is tall and standing on two legs, roaring. The bear is also wearing a crown because it is the king of all bears. Around them are tall trees and other animals watching.", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_5_0.png"}}]}
dataset
null
568
Kylef94/bear_classifier
Kylef94
image-classification
[ "fastai", "gradio", "vision", "image-classification", "license:apache-2.0", "region:us" ]
2023-05-11T12:47:46Z
2023-05-11T16:48:56+00:00
0
0
--- library_name: fastai license: apache-2.0 metrics: - error rate tags: - fastai - gradio - vision - image-classification finetuned From Model: Resnet18 --- # Bear-classifier A bear classifier trained using fast.ai
[ "BEAR" ]
Non_BioNLP
# Bear-classifier A bear classifier trained using fast.ai
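A minimal inference sketch for a fastai image classifier such as this one is shown below; the exported learner filename (`export.pkl`) and the input image path are assumptions, as the card does not document them.

```python
# Sketch: load an exported fastai learner and classify a single image.
from fastai.vision.all import load_learner, PILImage

learn = load_learner("export.pkl")              # assumed export filename
img = PILImage.create("some_bear_photo.jpg")    # any RGB image

pred_class, pred_idx, probs = learn.predict(img)
print(pred_class, float(probs[pred_idx]))
```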
{"library_name": "fastai", "license": "apache-2.0", "metrics": ["error rate"], "tags": ["fastai", "gradio", "vision", "image-classification"], "finetuned From Model": "Resnet18"}
dataset
null
569
RichardErkhov/EleutherAI_-_pythia-1b-deduped-4bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
2024-04-23T07:56:06Z
2024-04-23T07:57:19+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) pythia-1b-deduped - bnb 4bits - Model creator: https://huggingface.co/EleutherAI/ - Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped/ Original model description: --- language: - en tags: - pytorch - causal-lm - pythia license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-1B-deduped ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). 
<figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-1B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-1B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-1B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token used by the model need not produce the most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. 
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-1B-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data Pythia-1B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). ### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). 
You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
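Because this repository hosts the bitsandbytes 4-bit quantisation rather than the original checkpoint, it can be loaded directly with `transformers` provided `bitsandbytes` and `accelerate` are installed. The sketch below assumes the quantisation settings are embedded in the repository's config:

```python
# Sketch: load this pre-quantised 4-bit checkpoint and generate a short continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-1b-deduped-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Assumes the 4-bit quantization_config ships inside the repo's config.json.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(tokens[0]))
```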
[ "SCIQ" ]
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) pythia-1b-deduped - bnb 4bits - Model creator: https://huggingface.co/EleutherAI/ - Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped/ Original model description: --- language: - en tags: - pytorch - causal-lm - pythia license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-1B-deduped ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). 
<figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-1B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-1B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-1B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token used by the model need not produce the most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. 
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-1B-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data Pythia-1B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). ### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). 
You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
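The card notes that the upstream EleutherAI repository hosts its 154 intermediate checkpoints as branches. A small sketch for enumerating those branches with `huggingface_hub` (against the upstream repository, not this quantised one) could look like this:

```python
# Sketch: list the step-N checkpoint branches of the upstream Pythia repository.
from huggingface_hub import list_repo_refs

refs = list_repo_refs("EleutherAI/pythia-1b-deduped")
steps = sorted(
    (b.name for b in refs.branches if b.name.startswith("step")),
    key=lambda name: int(name.removeprefix("step")),
)
print(len(steps), steps[:5], steps[-1])
```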
{}
dataset
null
570
ManoloPueblo/LLM_MERGE_CC3
ManoloPueblo
null
[ "safetensors", "mistral", "merge", "mergekit", "lazymergekit", "llm-merge-cc3", "mistral-7b", "mistral-ft-optimized", "neural-hermes", "mistralai/Mistral-7B-v0.1", "samir-fama/SamirGPT-v1", "abacusai/Slerp-CM-mist-dpo", "EmbeddedLLM/Mistral-7B-Merge-14-v0.2", "license:apache-2.0", "region:us" ]
2024-11-10T13:05:42Z
2024-11-10T13:15:03+00:00
7
1
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - llm-merge-cc3 - mistral-7b - mistral-ft-optimized - neural-hermes - mistralai/Mistral-7B-v0.1 - samir-fama/SamirGPT-v1 - abacusai/Slerp-CM-mist-dpo - EmbeddedLLM/Mistral-7B-Merge-14-v0.2 --- # LLM_MERGE_CC3 LLM_MERGE_CC3 est une fusion des modèles suivants créée par ManoloPueblo utilisant [mergekit](https://github.com/cg123/mergekit): * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1) * [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo) * [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2) ## 🧩 Configuration de la fusion ```yaml merge_method: dare base_model: mistralai/Mistral-7B-v0.1 models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: samir-fama/SamirGPT-v1 parameters: density: 0.53 weight: 0.4 - model: abacusai/Slerp-CM-mist-dpo parameters: density: 0.53 weight: 0.3 - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2 parameters: density: 0.53 weight: 0.3 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` ## Description LLM_MERGE_CC3 est un modèle de langage créé par la fusion de trois variantes Mistral : 1. Mistral-7B-v0.1 - Le modèle de base Mistral (modèle de référence) 2. mistral-ft-optimized-1218 - Version optimisée par OpenPipe (poids: 0.5, densité: 0.5) 3. NeuralHermes-2.5-Mistral-7B - Version améliorée par MLabonne (poids: 0.3, densité: 0.5) Cette fusion utilise la méthode "dare" avec normalisation et une précision float16 pour combiner les forces des trois modèles. ## Architecture Le modèle conserve l'architecture de base de Mistral-7B tout en incorporant les améliorations des trois versions à travers une fusion pondérée. La méthode "ties" permet une fusion plus sophistiquée des poids des modèles. ## Paramètres de fusion - Méthode de fusion : dare - Normalisation : activée - Type de données : float16 - Densités et poids : * OpenPipe/mistral-ft-optimized-1218 : densité 0.5, poids 0.5 * NeuralHermes-2.5-Mistral-7B : densité 0.5, poids 0.3 ## Utilisation Ce modèle peut être utilisé avec la bibliothèque transformers de Hugging Face : ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("ManoloPueblo/LLM_MERGE_CC3") model = AutoModelForCausalLM.from_pretrained("ManoloPueblo/LLM_MERGE_CC3") ``` ## Modèles fusionnés 1. [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - Modèle de base 2. [mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) - Version optimisée 3. [NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) - Version améliorée ## Limitations Comme pour tout modèle de langage, les utilisateurs doivent être conscients des biais potentiels et des limitations inhérentes aux modèles sources. Les performances peuvent varier selon les cas d'utilisation.
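To turn a configuration like the one above into an actual merged checkpoint, mergekit is usually driven through its `mergekit-yaml` command. The sketch below writes the config to disk and shells out to that command; it resolves the duplicated `merge_method` key in favour of the later `dare_ties` value, and the exact CLI flags may differ between mergekit versions.

```python
# Hypothetical sketch: run a mergekit merge from Python by invoking the `mergekit-yaml` CLI.
# Assumes `pip install mergekit`; command-line options may vary between versions.
import subprocess
import textwrap

merge_config = textwrap.dedent("""\
    merge_method: dare_ties
    base_model: mistralai/Mistral-7B-v0.1
    models:
      - model: mistralai/Mistral-7B-v0.1
      - model: samir-fama/SamirGPT-v1
        parameters: {density: 0.53, weight: 0.4}
      - model: abacusai/Slerp-CM-mist-dpo
        parameters: {density: 0.53, weight: 0.3}
      - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
        parameters: {density: 0.53, weight: 0.3}
    parameters:
      int8_mask: true
    dtype: bfloat16
""")

with open("merge_config.yaml", "w") as f:
    f.write(merge_config)

subprocess.run(["mergekit-yaml", "merge_config.yaml", "./LLM_MERGE_CC3"], check=True)
```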
[ "CAS" ]
Non_BioNLP
# LLM_MERGE_CC3 LLM_MERGE_CC3 est une fusion des modèles suivants créée par ManoloPueblo utilisant [mergekit](https://github.com/cg123/mergekit): * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1) * [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo) * [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2) ## 🧩 Configuration de la fusion ```yaml merge_method: dare base_model: mistralai/Mistral-7B-v0.1 models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: samir-fama/SamirGPT-v1 parameters: density: 0.53 weight: 0.4 - model: abacusai/Slerp-CM-mist-dpo parameters: density: 0.53 weight: 0.3 - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2 parameters: density: 0.53 weight: 0.3 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` ## Description LLM_MERGE_CC3 est un modèle de langage créé par la fusion de trois variantes Mistral : 1. Mistral-7B-v0.1 - Le modèle de base Mistral (modèle de référence) 2. mistral-ft-optimized-1218 - Version optimisée par OpenPipe (poids: 0.5, densité: 0.5) 3. NeuralHermes-2.5-Mistral-7B - Version améliorée par MLabonne (poids: 0.3, densité: 0.5) Cette fusion utilise la méthode "dare" avec normalisation et une précision float16 pour combiner les forces des trois modèles. ## Architecture Le modèle conserve l'architecture de base de Mistral-7B tout en incorporant les améliorations des trois versions à travers une fusion pondérée. La méthode "ties" permet une fusion plus sophistiquée des poids des modèles. ## Paramètres de fusion - Méthode de fusion : dare - Normalisation : activée - Type de données : float16 - Densités et poids : * OpenPipe/mistral-ft-optimized-1218 : densité 0.5, poids 0.5 * NeuralHermes-2.5-Mistral-7B : densité 0.5, poids 0.3 ## Utilisation Ce modèle peut être utilisé avec la bibliothèque transformers de Hugging Face : ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("ManoloPueblo/LLM_MERGE_CC3") model = AutoModelForCausalLM.from_pretrained("ManoloPueblo/LLM_MERGE_CC3") ``` ## Modèles fusionnés 1. [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - Modèle de base 2. [mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) - Version optimisée 3. [NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) - Version améliorée ## Limitations Comme pour tout modèle de langage, les utilisateurs doivent être conscients des biais potentiels et des limitations inhérentes aux modèles sources. Les performances peuvent varier selon les cas d'utilisation.
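Once the merged model is loaded as shown above, generation works like any other causal LM; the prompt and decoding parameters in the sketch below are placeholders:

```python
# Sketch: simple text generation with the merged model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ManoloPueblo/LLM_MERGE_CC3")
model = AutoModelForCausalLM.from_pretrained(
    "ManoloPueblo/LLM_MERGE_CC3", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Explain model merging in one paragraph.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```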
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "llm-merge-cc3", "mistral-7b", "mistral-ft-optimized", "neural-hermes", "mistralai/Mistral-7B-v0.1", "samir-fama/SamirGPT-v1", "abacusai/Slerp-CM-mist-dpo", "EmbeddedLLM/Mistral-7B-Merge-14-v0.2"]}
dataset
null
571
QuantFactory/Einstein-v6.1-Llama3-8B-GGUF
QuantFactory
null
[ "gguf", "axolotl", "generated_from_trainer", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama", "llama3", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:HuggingFaceH4/no_robots", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:WizardLM/WizardLM_evol_instruct_70k", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:other", "model-index", "endpoints_compatible", "region:us", "conversational" ]
2024-05-05T15:29:11Z
2024-10-29T16:38:23+00:00
699
4
--- base_model: meta-llama/Meta-Llama-3-8B datasets: - allenai/ai2_arc - camel-ai/physics - camel-ai/chemistry - camel-ai/biology - camel-ai/math - metaeval/reclor - openbookqa - mandyyyyii/scibench - derek-thomas/ScienceQA - TIGER-Lab/ScienceEval - jondurbin/airoboros-3.2 - LDJnr/Capybara - Cot-Alpaca-GPT4-From-OpenHermes-2.5 - STEM-AI-mtl/Electrical-engineering - knowrohit07/saraswati-stem - sablo/oasst2_curated - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - bigbio/med_qa - meta-math/MetaMathQA-40K - openbookqa - piqa - metaeval/reclor - derek-thomas/ScienceQA - scibench - sciq - Open-Orca/SlimOrca - migtissera/Synthia-v1.3 - TIGER-Lab/ScienceEval - allenai/WildChat - microsoft/orca-math-word-problems-200k - openchat/openchat_sharegpt4_dataset - teknium/GPTeacher-General-Instruct - m-a-p/CodeFeedback-Filtered-Instruction - totally-not-an-llm/EverythingLM-data-V3 - HuggingFaceH4/no_robots - OpenAssistant/oasst_top1_2023-08-25 - WizardLM/WizardLM_evol_instruct_70k language: - en license: other tags: - axolotl - generated_from_trainer - instruct - finetune - chatml - gpt4 - synthetic data - science - physics - chemistry - biology - math - llama - llama3 model-index: - name: Einstein-v6.1-Llama3-8B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.46 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.41 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 66.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 55.1 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.32 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 66.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 45.68 name: strict 
accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 29.38 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 5.74 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 4.25 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 11.23 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 23.68 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B name: Open LLM Leaderboard --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Einstein-v6.1-Llama3-8B-GGUF This is quantized version of [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B) created using llama.cpp # Original Model Card ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/5s12oq859qLfDkkTNam_C.png) # 🔬 Einstein-v6.1-Llama3-8B This model is a full fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on diverse datasets. This model is finetuned using `8xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl). This model's training was sponsored by [sablo.ai](https://sablo.ai). 
<details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false chat_template: chatml datasets: - path: data/merged_all.json ds_type: json type: alpaca conversation: chatml - path: data/gpteacher-instruct-special-alpaca.json ds_type: json type: gpteacher conversation: chatml - path: data/wizardlm_evol_instruct_70k_random_half.json ds_type: json type: alpaca conversation: chatml - path: data/capybara_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/synthia-v1.3_sharegpt_12500.json ds_type: json type: sharegpt conversation: chatml - path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/slimorca_dedup_filtered_95k_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/allenai_wild_chat_gpt4_english_toxic_random_half_4k_sharegpt.json ds_type: json type: sharegpt strict: false conversation: chatml - path: data/pippa_bagel_repo_3k_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/gpt4_data_lmys_1m_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/sharegpt_gpt4_english.json ds_type: json type: sharegpt conversation: chatml - path: data/no_robots_sharegpt.json ds_type: json type: sharegpt strict: false conversation: chatml - path: data/oasst_top1_from_fusechatmixture_sharegpt.json ds_type: json type: sharegpt strict: false conversation: chatml - path: data/everythinglm-data-v3_sharegpt.json ds_type: json type: sharegpt strict: false conversation: chatml dataset_prepared_path: last_run_prepared val_set_size: 0.002 output_dir: ./Einstein-v6.1-Llama3-8B-model sequence_len: 8192 sample_packing: true pad_to_sequence_len: true eval_sample_packing: false wandb_project: Einstein wandb_entity: wandb_watch: wandb_name: Einstein-v6.1-Llama3-2-epoch wandb_log_model: hub_model_id: Weyaxi/Einstein-v6.1-Llama3-8B save_safetensors: true gradient_accumulation_steps: 4 micro_batch_size: 1 num_epochs: 2 optimizer: adamw_bnb_8bit # look lr_scheduler: cosine learning_rate: 0.000005 # look train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 2 eval_table_size: eval_table_max_new_tokens: 128 saves_per_epoch: 2 debug: deepspeed: zero3_bf16_cpuoffload_params.json weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "<|im_end|>" unk_token: "<unk>" pad_token: <|end_of_text|> # changed tokens: - "<|im_start|>" ``` </details><br> # 💬 Prompt Template You can use ChatML prompt template while using the model: ### ChatML ``` <|im_start|>system {system}<|im_end|> <|im_start|>user {user}<|im_end|> <|im_start|>assistant {asistant}<|im_end|> ``` This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are helpful AI asistant."}, {"role": "user", "content": "Hello!"} ] gen_input = tokenizer.apply_chat_template(message, 
return_tensors="pt") model.generate(**gen_input) ``` # 📊 Datasets used in this model The datasets used to train this model are listed in the metadata section of the model card. Please note that certain datasets mentioned in the metadata may have undergone filtering based on various criteria. The results of this filtering process and its outcomes are in the data folder of this repository: [Weyaxi/Einstein-v6.1-Llama3-8B/data](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B/tree/main/data) # 🔄 Quantizationed versions ## GGUF [@bartowski](https://huggingface.co/bartowski) - https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF ## ExLlamaV2 [@bartowski](https://huggingface.co/bartowski) - https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2 ## AWQ [@solidrust](https://huggingface.co/solidrust) - https://huggingface.co/solidrust/Einstein-v6.1-Llama3-8B-AWQ # 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v6.1-Llama3-8B) | Metric |Value| |---------------------------------|----:| |Avg. |68.60| |AI2 Reasoning Challenge (25-Shot)|62.46| |HellaSwag (10-Shot) |82.41| |MMLU (5-Shot) |66.19| |TruthfulQA (0-shot) |55.10| |Winogrande (5-shot) |79.32| |GSM8k (5-shot) |66.11| # 🎯 [Open LLM Leaderboard v2 Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v6.1-Llama3-8B) | Metric |Value| |-------------------|----:| |Avg. |19.99| |IFEval (0-Shot) |45.68| |BBH (3-Shot) |29.38| |MATH Lvl 5 (4-Shot)| 5.74| |GPQA (0-shot) | 4.25| |MuSR (0-shot) |11.23| |MMLU-PRO (5-shot) |23.68| # 📚 Some resources, discussions and reviews aboout this model #### 🐦 Announcement tweet: - https://twitter.com/Weyaxi/status/1783050724659675627 #### 🔍 Reddit post in r/LocalLLaMA: - https://www.reddit.com/r/LocalLLaMA/comments/1cdlym1/introducing_einstein_v61_based_on_the_new_llama3/ #### ▶️ Youtube Video(s) - [Install Einstein v6.1 Llama3-8B Locally on Windows](https://www.youtube.com/watch?v=VePvv6OM0JY) #### 📱 Octopus-V4-3B - [Octopus-V4-3B](https://huggingface.co/NexaAIDev/Octopus-v4) leverages the incredible physics capabilities of [Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B) in their model. # 🤖 Additional information about training This model is full fine-tuned for 2 epoch. Total number of steps was 2026. <details><summary>Loss graph</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/Ycs7ZpoqmxFt0u9rybCO1.png) </details><br> # 🤝 Acknowledgments Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model. Thanks to all the dataset authors mentioned in the datasets section. Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for making the repository I used to make this model. Thanks to all open source AI community. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) If you would like to support me: [☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
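Note that the chat-template snippet in the card defines `messages` but passes `message` to `apply_chat_template`. A corrected sketch of ChatML-style generation (pointing at the original full-precision repository; decoding parameters are placeholders) is:

```python
# Sketch: ChatML-style generation via the tokenizer's chat template, with the variable-name typo fixed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Weyaxi/Einstein-v6.1-Llama3-8B"  # original full-precision model
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
]

gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(gen_input, max_new_tokens=128)
print(tokenizer.decode(outputs[0][gen_input.shape[-1]:], skip_special_tokens=True))
```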
[ "SCIQ" ]
Non_BioNLP
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Einstein-v6.1-Llama3-8B-GGUF

This is a quantized version of [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B) created using llama.cpp

# Original Model Card

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/5s12oq859qLfDkkTNam_C.png)

# 🔬 Einstein-v6.1-Llama3-8B

This model is a fully fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on diverse datasets.

This model was fine-tuned on `8xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).

This model's training was sponsored by [sablo.ai](https://sablo.ai).

<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: chatml
datasets:
  - path: data/merged_all.json
    ds_type: json
    type: alpaca
    conversation: chatml
  - path: data/gpteacher-instruct-special-alpaca.json
    ds_type: json
    type: gpteacher
    conversation: chatml
  - path: data/wizardlm_evol_instruct_70k_random_half.json
    ds_type: json
    type: alpaca
    conversation: chatml
  - path: data/capybara_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/synthia-v1.3_sharegpt_12500.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/slimorca_dedup_filtered_95k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/allenai_wild_chat_gpt4_english_toxic_random_half_4k_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml
  - path: data/pippa_bagel_repo_3k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/gpt4_data_lmys_1m_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/sharegpt_gpt4_english.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/no_robots_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml
  - path: data/oasst_top1_from_fusechatmixture_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml
  - path: data/everythinglm-data-v3_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml

dataset_prepared_path: last_run_prepared
val_set_size: 0.002
output_dir: ./Einstein-v6.1-Llama3-8B-model

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

wandb_project: Einstein
wandb_entity:
wandb_watch:
wandb_name: Einstein-v6.1-Llama3-2-epoch
wandb_log_model:
hub_model_id: Weyaxi/Einstein-v6.1-Llama3-8B

save_safetensors: true

gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit # look
lr_scheduler: cosine
learning_rate: 0.000005 # look

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: zero3_bf16_cpuoffload_params.json
weight_decay: 0.0
fsdp:
fsdp_config:

special_tokens:
  bos_token: "<s>"
  eos_token: "<|im_end|>"
  unk_token: "<unk>"
  pad_token: <|end_of_text|> # changed

tokens:
  - "<|im_start|>"
```
</details><br>

# 💬 Prompt Template

You can use the ChatML prompt template while using the model:

### ChatML

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```

This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```

# 📊 Datasets used in this model

The datasets used to train this model are listed in the metadata section of the model card. Please note that certain datasets mentioned in the metadata may have undergone filtering based on various criteria. The results of this filtering process are in the data folder of this repository:

[Weyaxi/Einstein-v6.1-Llama3-8B/data](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B/tree/main/data)

# 🔄 Quantized versions

## GGUF

[@bartowski](https://huggingface.co/bartowski)

- https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF

## ExLlamaV2

[@bartowski](https://huggingface.co/bartowski)

- https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2

## AWQ

[@solidrust](https://huggingface.co/solidrust)

- https://huggingface.co/solidrust/Einstein-v6.1-Llama3-8B-AWQ
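For the GGUF files (this QuantFactory build or the bartowski quantization linked above), a minimal loading sketch using the `llama-cpp-python` bindings could look like the following; the file name and quantization level are only examples and depend on which file you download:

```python
from llama_cpp import Llama

# Path to a downloaded GGUF file; the exact name depends on the chosen quant level.
llm = Llama(
    model_path="./Einstein-v6.1-Llama3-8B.Q4_K_M.gguf",
    n_ctx=8192,       # matches the 8192 sequence length used during training
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

# GGUF files usually embed the chat template, so the ChatML format shown
# above is typically applied automatically by create_chat_completion.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["message"]["content"])
```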
# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v6.1-Llama3-8B)

| Metric                           |Value|
|---------------------------------|----:|
|Avg.                             |68.60|
|AI2 Reasoning Challenge (25-Shot)|62.46|
|HellaSwag (10-Shot)              |82.41|
|MMLU (5-Shot)                    |66.19|
|TruthfulQA (0-shot)              |55.10|
|Winogrande (5-shot)              |79.32|
|GSM8k (5-shot)                   |66.11|

# 🎯 [Open LLM Leaderboard v2 Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v6.1-Llama3-8B)

| Metric            |Value|
|-------------------|----:|
|Avg.               |19.99|
|IFEval (0-Shot)    |45.68|
|BBH (3-Shot)       |29.38|
|MATH Lvl 5 (4-Shot)| 5.74|
|GPQA (0-shot)      | 4.25|
|MuSR (0-shot)      |11.23|
|MMLU-PRO (5-shot)  |23.68|

# 📚 Some resources, discussions and reviews about this model

#### 🐦 Announcement tweet:

- https://twitter.com/Weyaxi/status/1783050724659675627

#### 🔍 Reddit post in r/LocalLLaMA:

- https://www.reddit.com/r/LocalLLaMA/comments/1cdlym1/introducing_einstein_v61_based_on_the_new_llama3/

#### ▶️ YouTube Video(s)

- [Install Einstein v6.1 Llama3-8B Locally on Windows](https://www.youtube.com/watch?v=VePvv6OM0JY)

#### 📱 Octopus-V4-3B

- [Octopus-V4-3B](https://huggingface.co/NexaAIDev/Octopus-v4) leverages the incredible physics capabilities of [Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B) in their model.

# 🤖 Additional information about training

This model was fully fine-tuned for 2 epochs. The total number of training steps was 2026.

<details><summary>Loss graph</summary>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/Ycs7ZpoqmxFt0u9rybCO1.png)

</details><br>

# 🤝 Acknowledgments

Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.

Thanks to all the dataset authors mentioned in the datasets section.

Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for providing the training framework used to build this model.

Thanks to the entire open-source AI community.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

If you would like to support me:

[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
{"base_model": "meta-llama/Meta-Llama-3-8B", "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval", "allenai/WildChat", "microsoft/orca-math-word-problems-200k", "openchat/openchat_sharegpt4_dataset", "teknium/GPTeacher-General-Instruct", "m-a-p/CodeFeedback-Filtered-Instruction", "totally-not-an-llm/EverythingLM-data-V3", "HuggingFaceH4/no_robots", "OpenAssistant/oasst_top1_2023-08-25", "WizardLM/WizardLM_evol_instruct_70k"], "language": ["en"], "license": "other", "tags": ["axolotl", "generated_from_trainer", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama", "llama3"], "model-index": [{"name": "Einstein-v6.1-Llama3-8B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 62.46, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 82.41, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.19, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 55.1}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 79.32, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": 
"gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.11, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "IFEval (0-Shot)", "type": "HuggingFaceH4/ifeval", "args": {"num_few_shot": 0}}, "metrics": [{"type": "inst_level_strict_acc and prompt_level_strict_acc", "value": 45.68, "name": "strict accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "BBH (3-Shot)", "type": "BBH", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc_norm", "value": 29.38, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MATH Lvl 5 (4-Shot)", "type": "hendrycks/competition_math", "args": {"num_few_shot": 4}}, "metrics": [{"type": "exact_match", "value": 5.74, "name": "exact match"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GPQA (0-shot)", "type": "Idavidrein/gpqa", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 4.25, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MuSR (0-shot)", "type": "TAUR-Lab/MuSR", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 11.23, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU-PRO (5-shot)", "type": "TIGER-Lab/MMLU-Pro", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 23.68, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}]}]}
dataset
null
572
vectoriseai/e5-base
vectoriseai
sentence-similarity
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "mteb", "Sentence Transformers", "sentence-similarity", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-10-11T07:22:59Z
2023-10-11T07:26:25+00:00
8
0
--- language: - en license: mit tags: - mteb - Sentence Transformers - sentence-similarity - sentence-transformers model-index: - name: e5-base results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.71641791044777 - type: ap value: 44.15426065428253 - type: f1 value: 73.89474407693241 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 87.9649 - type: ap value: 84.10171551915973 - type: f1 value: 87.94148377827356 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 42.645999999999994 - type: f1 value: 42.230574673549 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 26.814 - type: map_at_10 value: 42.681999999999995 - type: map_at_100 value: 43.714 - type: map_at_1000 value: 43.724000000000004 - type: map_at_3 value: 38.11 - type: map_at_5 value: 40.666999999999994 - type: mrr_at_1 value: 27.168999999999997 - type: mrr_at_10 value: 42.84 - type: mrr_at_100 value: 43.864 - type: mrr_at_1000 value: 43.875 - type: mrr_at_3 value: 38.193 - type: mrr_at_5 value: 40.793 - type: ndcg_at_1 value: 26.814 - type: ndcg_at_10 value: 51.410999999999994 - type: ndcg_at_100 value: 55.713 - type: ndcg_at_1000 value: 55.957 - type: ndcg_at_3 value: 41.955 - type: ndcg_at_5 value: 46.558 - type: precision_at_1 value: 26.814 - type: precision_at_10 value: 7.922999999999999 - type: precision_at_100 value: 0.9780000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 17.71 - type: precision_at_5 value: 12.859000000000002 - type: recall_at_1 value: 26.814 - type: recall_at_10 value: 79.232 - type: recall_at_100 value: 97.795 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 53.129000000000005 - type: recall_at_5 value: 64.29599999999999 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.56933066536439 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.47647746165173 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 59.65675531567043 - type: mrr value: 72.95255683067317 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 85.83147014162338 - type: cos_sim_spearman value: 85.1031439521441 - type: euclidean_pearson value: 83.53609085510973 - type: euclidean_spearman value: 84.59650590202833 - type: manhattan_pearson value: 83.14611947586386 - type: manhattan_spearman value: 84.13384475757064 - task: type: Classification dataset: name: MTEB Banking77Classification 
type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 83.32792207792208 - type: f1 value: 83.32037485050513 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 36.18605446588703 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.72379130181917 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.659 - type: map_at_10 value: 40.333999999999996 - type: map_at_100 value: 41.763 - type: map_at_1000 value: 41.894 - type: map_at_3 value: 37.561 - type: map_at_5 value: 39.084 - type: mrr_at_1 value: 37.482 - type: mrr_at_10 value: 45.736 - type: mrr_at_100 value: 46.591 - type: mrr_at_1000 value: 46.644999999999996 - type: mrr_at_3 value: 43.491 - type: mrr_at_5 value: 44.75 - type: ndcg_at_1 value: 37.482 - type: ndcg_at_10 value: 45.606 - type: ndcg_at_100 value: 51.172 - type: ndcg_at_1000 value: 53.407000000000004 - type: ndcg_at_3 value: 41.808 - type: ndcg_at_5 value: 43.449 - type: precision_at_1 value: 37.482 - type: precision_at_10 value: 8.254999999999999 - type: precision_at_100 value: 1.3719999999999999 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 19.695 - type: precision_at_5 value: 13.847999999999999 - type: recall_at_1 value: 30.659 - type: recall_at_10 value: 55.409 - type: recall_at_100 value: 78.687 - type: recall_at_1000 value: 93.068 - type: recall_at_3 value: 43.891999999999996 - type: recall_at_5 value: 48.678 - type: map_at_1 value: 30.977 - type: map_at_10 value: 40.296 - type: map_at_100 value: 41.453 - type: map_at_1000 value: 41.581 - type: map_at_3 value: 37.619 - type: map_at_5 value: 39.181 - type: mrr_at_1 value: 39.108 - type: mrr_at_10 value: 46.894000000000005 - type: mrr_at_100 value: 47.55 - type: mrr_at_1000 value: 47.598 - type: mrr_at_3 value: 44.766 - type: mrr_at_5 value: 46.062999999999995 - type: ndcg_at_1 value: 39.108 - type: ndcg_at_10 value: 45.717 - type: ndcg_at_100 value: 49.941 - type: ndcg_at_1000 value: 52.138 - type: ndcg_at_3 value: 42.05 - type: ndcg_at_5 value: 43.893 - type: precision_at_1 value: 39.108 - type: precision_at_10 value: 8.306 - type: precision_at_100 value: 1.3419999999999999 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 19.979 - type: precision_at_5 value: 14.038 - type: recall_at_1 value: 30.977 - type: recall_at_10 value: 54.688 - type: recall_at_100 value: 72.556 - type: recall_at_1000 value: 86.53800000000001 - type: recall_at_3 value: 43.388 - type: recall_at_5 value: 48.717 - type: map_at_1 value: 39.812 - type: map_at_10 value: 50.1 - type: map_at_100 value: 51.193999999999996 - type: map_at_1000 value: 51.258 - type: map_at_3 value: 47.510999999999996 - type: map_at_5 value: 48.891 - type: mrr_at_1 value: 45.266 - type: mrr_at_10 value: 53.459999999999994 - type: mrr_at_100 value: 54.19199999999999 - type: mrr_at_1000 value: 54.228 - type: mrr_at_3 value: 51.296 - type: mrr_at_5 value: 52.495999999999995 - type: ndcg_at_1 value: 45.266 - type: ndcg_at_10 value: 55.034000000000006 - type: ndcg_at_100 value: 59.458 - type: ndcg_at_1000 value: 60.862 - type: 
ndcg_at_3 value: 50.52799999999999 - type: ndcg_at_5 value: 52.564 - type: precision_at_1 value: 45.266 - type: precision_at_10 value: 8.483 - type: precision_at_100 value: 1.162 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 21.944 - type: precision_at_5 value: 14.721 - type: recall_at_1 value: 39.812 - type: recall_at_10 value: 66.36 - type: recall_at_100 value: 85.392 - type: recall_at_1000 value: 95.523 - type: recall_at_3 value: 54.127 - type: recall_at_5 value: 59.245000000000005 - type: map_at_1 value: 26.186 - type: map_at_10 value: 33.18 - type: map_at_100 value: 34.052 - type: map_at_1000 value: 34.149 - type: map_at_3 value: 31.029 - type: map_at_5 value: 32.321 - type: mrr_at_1 value: 28.136 - type: mrr_at_10 value: 35.195 - type: mrr_at_100 value: 35.996 - type: mrr_at_1000 value: 36.076 - type: mrr_at_3 value: 33.051 - type: mrr_at_5 value: 34.407 - type: ndcg_at_1 value: 28.136 - type: ndcg_at_10 value: 37.275999999999996 - type: ndcg_at_100 value: 41.935 - type: ndcg_at_1000 value: 44.389 - type: ndcg_at_3 value: 33.059 - type: ndcg_at_5 value: 35.313 - type: precision_at_1 value: 28.136 - type: precision_at_10 value: 5.457999999999999 - type: precision_at_100 value: 0.826 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 13.522 - type: precision_at_5 value: 9.424000000000001 - type: recall_at_1 value: 26.186 - type: recall_at_10 value: 47.961999999999996 - type: recall_at_100 value: 70.072 - type: recall_at_1000 value: 88.505 - type: recall_at_3 value: 36.752 - type: recall_at_5 value: 42.168 - type: map_at_1 value: 16.586000000000002 - type: map_at_10 value: 23.637 - type: map_at_100 value: 24.82 - type: map_at_1000 value: 24.95 - type: map_at_3 value: 21.428 - type: map_at_5 value: 22.555 - type: mrr_at_1 value: 20.771 - type: mrr_at_10 value: 27.839999999999996 - type: mrr_at_100 value: 28.887 - type: mrr_at_1000 value: 28.967 - type: mrr_at_3 value: 25.56 - type: mrr_at_5 value: 26.723000000000003 - type: ndcg_at_1 value: 20.771 - type: ndcg_at_10 value: 28.255000000000003 - type: ndcg_at_100 value: 33.886 - type: ndcg_at_1000 value: 36.963 - type: ndcg_at_3 value: 24.056 - type: ndcg_at_5 value: 25.818 - type: precision_at_1 value: 20.771 - type: precision_at_10 value: 5.1 - type: precision_at_100 value: 0.9119999999999999 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 11.526 - type: precision_at_5 value: 8.158999999999999 - type: recall_at_1 value: 16.586000000000002 - type: recall_at_10 value: 38.456 - type: recall_at_100 value: 62.666 - type: recall_at_1000 value: 84.47 - type: recall_at_3 value: 26.765 - type: recall_at_5 value: 31.297000000000004 - type: map_at_1 value: 28.831 - type: map_at_10 value: 37.545 - type: map_at_100 value: 38.934999999999995 - type: map_at_1000 value: 39.044000000000004 - type: map_at_3 value: 34.601 - type: map_at_5 value: 36.302 - type: mrr_at_1 value: 34.264 - type: mrr_at_10 value: 42.569 - type: mrr_at_100 value: 43.514 - type: mrr_at_1000 value: 43.561 - type: mrr_at_3 value: 40.167 - type: mrr_at_5 value: 41.678 - type: ndcg_at_1 value: 34.264 - type: ndcg_at_10 value: 42.914 - type: ndcg_at_100 value: 48.931999999999995 - type: ndcg_at_1000 value: 51.004000000000005 - type: ndcg_at_3 value: 38.096999999999994 - type: ndcg_at_5 value: 40.509 - type: precision_at_1 value: 34.264 - type: precision_at_10 value: 7.642 - type: precision_at_100 value: 1.258 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 17.453 - type: precision_at_5 value: 12.608 - 
type: recall_at_1 value: 28.831 - type: recall_at_10 value: 53.56999999999999 - type: recall_at_100 value: 79.26100000000001 - type: recall_at_1000 value: 92.862 - type: recall_at_3 value: 40.681 - type: recall_at_5 value: 46.597 - type: map_at_1 value: 27.461000000000002 - type: map_at_10 value: 35.885 - type: map_at_100 value: 37.039 - type: map_at_1000 value: 37.16 - type: map_at_3 value: 33.451 - type: map_at_5 value: 34.807 - type: mrr_at_1 value: 34.018 - type: mrr_at_10 value: 41.32 - type: mrr_at_100 value: 42.157 - type: mrr_at_1000 value: 42.223 - type: mrr_at_3 value: 39.288000000000004 - type: mrr_at_5 value: 40.481 - type: ndcg_at_1 value: 34.018 - type: ndcg_at_10 value: 40.821000000000005 - type: ndcg_at_100 value: 46.053 - type: ndcg_at_1000 value: 48.673 - type: ndcg_at_3 value: 36.839 - type: ndcg_at_5 value: 38.683 - type: precision_at_1 value: 34.018 - type: precision_at_10 value: 7.009 - type: precision_at_100 value: 1.123 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 16.933 - type: precision_at_5 value: 11.826 - type: recall_at_1 value: 27.461000000000002 - type: recall_at_10 value: 50.285000000000004 - type: recall_at_100 value: 73.25500000000001 - type: recall_at_1000 value: 91.17699999999999 - type: recall_at_3 value: 39.104 - type: recall_at_5 value: 43.968 - type: map_at_1 value: 26.980083333333337 - type: map_at_10 value: 34.47208333333333 - type: map_at_100 value: 35.609249999999996 - type: map_at_1000 value: 35.72833333333333 - type: map_at_3 value: 32.189416666666666 - type: map_at_5 value: 33.44683333333334 - type: mrr_at_1 value: 31.731666666666662 - type: mrr_at_10 value: 38.518 - type: mrr_at_100 value: 39.38166666666667 - type: mrr_at_1000 value: 39.446999999999996 - type: mrr_at_3 value: 36.49966666666668 - type: mrr_at_5 value: 37.639916666666664 - type: ndcg_at_1 value: 31.731666666666662 - type: ndcg_at_10 value: 38.92033333333333 - type: ndcg_at_100 value: 44.01675 - type: ndcg_at_1000 value: 46.51075 - type: ndcg_at_3 value: 35.09766666666667 - type: ndcg_at_5 value: 36.842999999999996 - type: precision_at_1 value: 31.731666666666662 - type: precision_at_10 value: 6.472583333333332 - type: precision_at_100 value: 1.0665 - type: precision_at_1000 value: 0.14725000000000002 - type: precision_at_3 value: 15.659083333333331 - type: precision_at_5 value: 10.878833333333333 - type: recall_at_1 value: 26.980083333333337 - type: recall_at_10 value: 48.13925 - type: recall_at_100 value: 70.70149999999998 - type: recall_at_1000 value: 88.10775000000001 - type: recall_at_3 value: 37.30091666666667 - type: recall_at_5 value: 41.90358333333333 - type: map_at_1 value: 25.607999999999997 - type: map_at_10 value: 30.523 - type: map_at_100 value: 31.409 - type: map_at_1000 value: 31.507 - type: map_at_3 value: 28.915000000000003 - type: map_at_5 value: 29.756 - type: mrr_at_1 value: 28.681 - type: mrr_at_10 value: 33.409 - type: mrr_at_100 value: 34.241 - type: mrr_at_1000 value: 34.313 - type: mrr_at_3 value: 32.029999999999994 - type: mrr_at_5 value: 32.712 - type: ndcg_at_1 value: 28.681 - type: ndcg_at_10 value: 33.733000000000004 - type: ndcg_at_100 value: 38.32 - type: ndcg_at_1000 value: 40.937 - type: ndcg_at_3 value: 30.898999999999997 - type: ndcg_at_5 value: 32.088 - type: precision_at_1 value: 28.681 - type: precision_at_10 value: 4.968999999999999 - type: precision_at_100 value: 0.79 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 12.73 - type: precision_at_5 value: 8.558 - type: recall_at_1 value: 
25.607999999999997 - type: recall_at_10 value: 40.722 - type: recall_at_100 value: 61.956999999999994 - type: recall_at_1000 value: 81.43 - type: recall_at_3 value: 32.785 - type: recall_at_5 value: 35.855 - type: map_at_1 value: 20.399 - type: map_at_10 value: 25.968000000000004 - type: map_at_100 value: 26.985999999999997 - type: map_at_1000 value: 27.105 - type: map_at_3 value: 24.215 - type: map_at_5 value: 25.157 - type: mrr_at_1 value: 24.708 - type: mrr_at_10 value: 29.971999999999998 - type: mrr_at_100 value: 30.858 - type: mrr_at_1000 value: 30.934 - type: mrr_at_3 value: 28.304000000000002 - type: mrr_at_5 value: 29.183999999999997 - type: ndcg_at_1 value: 24.708 - type: ndcg_at_10 value: 29.676000000000002 - type: ndcg_at_100 value: 34.656 - type: ndcg_at_1000 value: 37.588 - type: ndcg_at_3 value: 26.613 - type: ndcg_at_5 value: 27.919 - type: precision_at_1 value: 24.708 - type: precision_at_10 value: 5.01 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 11.975 - type: precision_at_5 value: 8.279 - type: recall_at_1 value: 20.399 - type: recall_at_10 value: 36.935 - type: recall_at_100 value: 59.532 - type: recall_at_1000 value: 80.58 - type: recall_at_3 value: 27.979 - type: recall_at_5 value: 31.636999999999997 - type: map_at_1 value: 27.606 - type: map_at_10 value: 34.213 - type: map_at_100 value: 35.339999999999996 - type: map_at_1000 value: 35.458 - type: map_at_3 value: 31.987 - type: map_at_5 value: 33.322 - type: mrr_at_1 value: 31.53 - type: mrr_at_10 value: 37.911 - type: mrr_at_100 value: 38.879000000000005 - type: mrr_at_1000 value: 38.956 - type: mrr_at_3 value: 35.868 - type: mrr_at_5 value: 37.047999999999995 - type: ndcg_at_1 value: 31.53 - type: ndcg_at_10 value: 38.312000000000005 - type: ndcg_at_100 value: 43.812 - type: ndcg_at_1000 value: 46.414 - type: ndcg_at_3 value: 34.319 - type: ndcg_at_5 value: 36.312 - type: precision_at_1 value: 31.53 - type: precision_at_10 value: 5.970000000000001 - type: precision_at_100 value: 0.9939999999999999 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 14.738999999999999 - type: precision_at_5 value: 10.242999999999999 - type: recall_at_1 value: 27.606 - type: recall_at_10 value: 47.136 - type: recall_at_100 value: 71.253 - type: recall_at_1000 value: 89.39399999999999 - type: recall_at_3 value: 36.342 - type: recall_at_5 value: 41.388999999999996 - type: map_at_1 value: 24.855 - type: map_at_10 value: 31.963 - type: map_at_100 value: 33.371 - type: map_at_1000 value: 33.584 - type: map_at_3 value: 29.543999999999997 - type: map_at_5 value: 30.793 - type: mrr_at_1 value: 29.644 - type: mrr_at_10 value: 35.601 - type: mrr_at_100 value: 36.551 - type: mrr_at_1000 value: 36.623 - type: mrr_at_3 value: 33.399 - type: mrr_at_5 value: 34.575 - type: ndcg_at_1 value: 29.644 - type: ndcg_at_10 value: 36.521 - type: ndcg_at_100 value: 42.087 - type: ndcg_at_1000 value: 45.119 - type: ndcg_at_3 value: 32.797 - type: ndcg_at_5 value: 34.208 - type: precision_at_1 value: 29.644 - type: precision_at_10 value: 6.7 - type: precision_at_100 value: 1.374 - type: precision_at_1000 value: 0.22899999999999998 - type: precision_at_3 value: 15.152 - type: precision_at_5 value: 10.671999999999999 - type: recall_at_1 value: 24.855 - type: recall_at_10 value: 45.449 - type: recall_at_100 value: 70.921 - type: recall_at_1000 value: 90.629 - type: recall_at_3 value: 33.526 - type: recall_at_5 value: 37.848 - type: map_at_1 value: 24.781 - type: map_at_10 value: 
30.020999999999997 - type: map_at_100 value: 30.948999999999998 - type: map_at_1000 value: 31.05 - type: map_at_3 value: 28.412 - type: map_at_5 value: 29.193 - type: mrr_at_1 value: 27.172 - type: mrr_at_10 value: 32.309 - type: mrr_at_100 value: 33.164 - type: mrr_at_1000 value: 33.239999999999995 - type: mrr_at_3 value: 30.775999999999996 - type: mrr_at_5 value: 31.562 - type: ndcg_at_1 value: 27.172 - type: ndcg_at_10 value: 33.178999999999995 - type: ndcg_at_100 value: 37.949 - type: ndcg_at_1000 value: 40.635 - type: ndcg_at_3 value: 30.107 - type: ndcg_at_5 value: 31.36 - type: precision_at_1 value: 27.172 - type: precision_at_10 value: 4.769 - type: precision_at_100 value: 0.769 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 12.261 - type: precision_at_5 value: 8.17 - type: recall_at_1 value: 24.781 - type: recall_at_10 value: 40.699000000000005 - type: recall_at_100 value: 62.866 - type: recall_at_1000 value: 83.11699999999999 - type: recall_at_3 value: 32.269999999999996 - type: recall_at_5 value: 35.443999999999996 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 5.2139999999999995 - type: map_at_10 value: 9.986 - type: map_at_100 value: 11.343 - type: map_at_1000 value: 11.55 - type: map_at_3 value: 7.961 - type: map_at_5 value: 8.967 - type: mrr_at_1 value: 12.052 - type: mrr_at_10 value: 20.165 - type: mrr_at_100 value: 21.317 - type: mrr_at_1000 value: 21.399 - type: mrr_at_3 value: 17.079 - type: mrr_at_5 value: 18.695 - type: ndcg_at_1 value: 12.052 - type: ndcg_at_10 value: 15.375 - type: ndcg_at_100 value: 21.858 - type: ndcg_at_1000 value: 26.145000000000003 - type: ndcg_at_3 value: 11.334 - type: ndcg_at_5 value: 12.798000000000002 - type: precision_at_1 value: 12.052 - type: precision_at_10 value: 5.16 - type: precision_at_100 value: 1.206 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 8.73 - type: precision_at_5 value: 7.114 - type: recall_at_1 value: 5.2139999999999995 - type: recall_at_10 value: 20.669999999999998 - type: recall_at_100 value: 43.901 - type: recall_at_1000 value: 68.447 - type: recall_at_3 value: 11.049000000000001 - type: recall_at_5 value: 14.652999999999999 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.511000000000001 - type: map_at_10 value: 19.503 - type: map_at_100 value: 27.46 - type: map_at_1000 value: 29.187 - type: map_at_3 value: 14.030999999999999 - type: map_at_5 value: 16.329 - type: mrr_at_1 value: 63.74999999999999 - type: mrr_at_10 value: 73.419 - type: mrr_at_100 value: 73.691 - type: mrr_at_1000 value: 73.697 - type: mrr_at_3 value: 71.792 - type: mrr_at_5 value: 72.979 - type: ndcg_at_1 value: 53.125 - type: ndcg_at_10 value: 41.02 - type: ndcg_at_100 value: 45.407 - type: ndcg_at_1000 value: 52.68000000000001 - type: ndcg_at_3 value: 46.088 - type: ndcg_at_5 value: 43.236000000000004 - type: precision_at_1 value: 63.74999999999999 - type: precision_at_10 value: 32.35 - type: precision_at_100 value: 10.363 - type: precision_at_1000 value: 2.18 - type: precision_at_3 value: 49.667 - type: precision_at_5 value: 41.5 - type: recall_at_1 value: 8.511000000000001 - type: recall_at_10 value: 24.851 - type: recall_at_100 value: 50.745 - type: recall_at_1000 value: 73.265 - type: recall_at_3 value: 15.716 - type: recall_at_5 value: 19.256 - task: type: Classification dataset: name: MTEB 
EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 49.43500000000001 - type: f1 value: 44.56288273966374 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 40.858 - type: map_at_10 value: 52.276 - type: map_at_100 value: 52.928 - type: map_at_1000 value: 52.966 - type: map_at_3 value: 49.729 - type: map_at_5 value: 51.27 - type: mrr_at_1 value: 43.624 - type: mrr_at_10 value: 55.22899999999999 - type: mrr_at_100 value: 55.823 - type: mrr_at_1000 value: 55.85 - type: mrr_at_3 value: 52.739999999999995 - type: mrr_at_5 value: 54.251000000000005 - type: ndcg_at_1 value: 43.624 - type: ndcg_at_10 value: 58.23500000000001 - type: ndcg_at_100 value: 61.315 - type: ndcg_at_1000 value: 62.20099999999999 - type: ndcg_at_3 value: 53.22 - type: ndcg_at_5 value: 55.88999999999999 - type: precision_at_1 value: 43.624 - type: precision_at_10 value: 8.068999999999999 - type: precision_at_100 value: 0.975 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 21.752 - type: precision_at_5 value: 14.515 - type: recall_at_1 value: 40.858 - type: recall_at_10 value: 73.744 - type: recall_at_100 value: 87.667 - type: recall_at_1000 value: 94.15599999999999 - type: recall_at_3 value: 60.287 - type: recall_at_5 value: 66.703 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 17.864 - type: map_at_10 value: 28.592000000000002 - type: map_at_100 value: 30.165 - type: map_at_1000 value: 30.364 - type: map_at_3 value: 24.586 - type: map_at_5 value: 26.717000000000002 - type: mrr_at_1 value: 35.031 - type: mrr_at_10 value: 43.876 - type: mrr_at_100 value: 44.683 - type: mrr_at_1000 value: 44.736 - type: mrr_at_3 value: 40.998000000000005 - type: mrr_at_5 value: 42.595 - type: ndcg_at_1 value: 35.031 - type: ndcg_at_10 value: 36.368 - type: ndcg_at_100 value: 42.472 - type: ndcg_at_1000 value: 45.973000000000006 - type: ndcg_at_3 value: 31.915 - type: ndcg_at_5 value: 33.394 - type: precision_at_1 value: 35.031 - type: precision_at_10 value: 10.139 - type: precision_at_100 value: 1.6420000000000001 - type: precision_at_1000 value: 0.22699999999999998 - type: precision_at_3 value: 21.142 - type: precision_at_5 value: 15.772 - type: recall_at_1 value: 17.864 - type: recall_at_10 value: 43.991 - type: recall_at_100 value: 66.796 - type: recall_at_1000 value: 87.64 - type: recall_at_3 value: 28.915999999999997 - type: recall_at_5 value: 35.185 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 36.556 - type: map_at_10 value: 53.056000000000004 - type: map_at_100 value: 53.909 - type: map_at_1000 value: 53.98 - type: map_at_3 value: 49.982 - type: map_at_5 value: 51.9 - type: mrr_at_1 value: 73.113 - type: mrr_at_10 value: 79.381 - type: mrr_at_100 value: 79.60300000000001 - type: mrr_at_1000 value: 79.617 - type: mrr_at_3 value: 78.298 - type: mrr_at_5 value: 78.995 - type: ndcg_at_1 value: 73.113 - type: ndcg_at_10 value: 62.21 - type: ndcg_at_100 value: 65.242 - type: ndcg_at_1000 value: 66.667 - type: ndcg_at_3 value: 57.717 - type: ndcg_at_5 value: 60.224 - type: precision_at_1 value: 73.113 - type: precision_at_10 value: 12.842999999999998 - type: precision_at_100 value: 1.522 - type: precision_at_1000 value: 0.17099999999999999 - type: 
precision_at_3 value: 36.178 - type: precision_at_5 value: 23.695 - type: recall_at_1 value: 36.556 - type: recall_at_10 value: 64.213 - type: recall_at_100 value: 76.077 - type: recall_at_1000 value: 85.53699999999999 - type: recall_at_3 value: 54.266999999999996 - type: recall_at_5 value: 59.236999999999995 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 75.958 - type: ap value: 69.82869527654348 - type: f1 value: 75.89120903005633 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 23.608 - type: map_at_10 value: 36.144 - type: map_at_100 value: 37.244 - type: map_at_1000 value: 37.291999999999994 - type: map_at_3 value: 32.287 - type: map_at_5 value: 34.473 - type: mrr_at_1 value: 24.226 - type: mrr_at_10 value: 36.711 - type: mrr_at_100 value: 37.758 - type: mrr_at_1000 value: 37.8 - type: mrr_at_3 value: 32.92 - type: mrr_at_5 value: 35.104 - type: ndcg_at_1 value: 24.269 - type: ndcg_at_10 value: 43.138 - type: ndcg_at_100 value: 48.421 - type: ndcg_at_1000 value: 49.592000000000006 - type: ndcg_at_3 value: 35.269 - type: ndcg_at_5 value: 39.175 - type: precision_at_1 value: 24.269 - type: precision_at_10 value: 6.755999999999999 - type: precision_at_100 value: 0.941 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.938 - type: precision_at_5 value: 10.934000000000001 - type: recall_at_1 value: 23.608 - type: recall_at_10 value: 64.679 - type: recall_at_100 value: 89.027 - type: recall_at_1000 value: 97.91 - type: recall_at_3 value: 43.25 - type: recall_at_5 value: 52.617000000000004 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.21477428180576 - type: f1 value: 92.92502305092152 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.76744186046511 - type: f1 value: 59.19855520057899 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.24613315400134 - type: f1 value: 70.19950395651232 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.75857431069268 - type: f1 value: 76.5433450230191 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.525463791623604 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.28695907385136 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.068174046665224 - type: mrr value: 
30.827586642840803 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.322 - type: map_at_10 value: 13.919999999999998 - type: map_at_100 value: 17.416 - type: map_at_1000 value: 18.836 - type: map_at_3 value: 10.111 - type: map_at_5 value: 11.991999999999999 - type: mrr_at_1 value: 48.297000000000004 - type: mrr_at_10 value: 57.114 - type: mrr_at_100 value: 57.713 - type: mrr_at_1000 value: 57.751 - type: mrr_at_3 value: 55.108000000000004 - type: mrr_at_5 value: 56.533 - type: ndcg_at_1 value: 46.44 - type: ndcg_at_10 value: 36.589 - type: ndcg_at_100 value: 33.202 - type: ndcg_at_1000 value: 41.668 - type: ndcg_at_3 value: 41.302 - type: ndcg_at_5 value: 39.829 - type: precision_at_1 value: 47.988 - type: precision_at_10 value: 27.059 - type: precision_at_100 value: 8.235000000000001 - type: precision_at_1000 value: 2.091 - type: precision_at_3 value: 38.184000000000005 - type: precision_at_5 value: 34.365 - type: recall_at_1 value: 6.322 - type: recall_at_10 value: 18.288 - type: recall_at_100 value: 32.580999999999996 - type: recall_at_1000 value: 63.605999999999995 - type: recall_at_3 value: 11.266 - type: recall_at_5 value: 14.69 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 36.586999999999996 - type: map_at_10 value: 52.464 - type: map_at_100 value: 53.384 - type: map_at_1000 value: 53.405 - type: map_at_3 value: 48.408 - type: map_at_5 value: 50.788999999999994 - type: mrr_at_1 value: 40.904 - type: mrr_at_10 value: 54.974000000000004 - type: mrr_at_100 value: 55.60699999999999 - type: mrr_at_1000 value: 55.623 - type: mrr_at_3 value: 51.73799999999999 - type: mrr_at_5 value: 53.638 - type: ndcg_at_1 value: 40.904 - type: ndcg_at_10 value: 59.965999999999994 - type: ndcg_at_100 value: 63.613 - type: ndcg_at_1000 value: 64.064 - type: ndcg_at_3 value: 52.486 - type: ndcg_at_5 value: 56.377 - type: precision_at_1 value: 40.904 - type: precision_at_10 value: 9.551 - type: precision_at_100 value: 1.162 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.552 - type: precision_at_5 value: 16.436999999999998 - type: recall_at_1 value: 36.586999999999996 - type: recall_at_10 value: 80.094 - type: recall_at_100 value: 95.515 - type: recall_at_1000 value: 98.803 - type: recall_at_3 value: 60.907 - type: recall_at_5 value: 69.817 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.422 - type: map_at_10 value: 84.113 - type: map_at_100 value: 84.744 - type: map_at_1000 value: 84.762 - type: map_at_3 value: 81.171 - type: map_at_5 value: 83.039 - type: mrr_at_1 value: 81.12 - type: mrr_at_10 value: 87.277 - type: mrr_at_100 value: 87.384 - type: mrr_at_1000 value: 87.385 - type: mrr_at_3 value: 86.315 - type: mrr_at_5 value: 86.981 - type: ndcg_at_1 value: 81.12 - type: ndcg_at_10 value: 87.92 - type: ndcg_at_100 value: 89.178 - type: ndcg_at_1000 value: 89.29899999999999 - type: ndcg_at_3 value: 85.076 - type: ndcg_at_5 value: 86.67099999999999 - type: precision_at_1 value: 81.12 - type: precision_at_10 value: 13.325999999999999 - type: precision_at_100 value: 1.524 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.16 - type: precision_at_5 value: 24.456 - type: recall_at_1 value: 70.422 - type: recall_at_10 value: 95.00800000000001 - type: recall_at_100 value: 99.38 - type: 
recall_at_1000 value: 99.94800000000001 - type: recall_at_3 value: 86.809 - type: recall_at_5 value: 91.334 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 48.18491891699636 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 62.190639679711914 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.478 - type: map_at_10 value: 11.268 - type: map_at_100 value: 13.129 - type: map_at_1000 value: 13.41 - type: map_at_3 value: 8.103 - type: map_at_5 value: 9.609 - type: mrr_at_1 value: 22 - type: mrr_at_10 value: 32.248 - type: mrr_at_100 value: 33.355000000000004 - type: mrr_at_1000 value: 33.42 - type: mrr_at_3 value: 29.15 - type: mrr_at_5 value: 30.785 - type: ndcg_at_1 value: 22 - type: ndcg_at_10 value: 18.990000000000002 - type: ndcg_at_100 value: 26.302999999999997 - type: ndcg_at_1000 value: 31.537 - type: ndcg_at_3 value: 18.034 - type: ndcg_at_5 value: 15.655 - type: precision_at_1 value: 22 - type: precision_at_10 value: 9.91 - type: precision_at_100 value: 2.0420000000000003 - type: precision_at_1000 value: 0.33 - type: precision_at_3 value: 16.933 - type: precision_at_5 value: 13.719999999999999 - type: recall_at_1 value: 4.478 - type: recall_at_10 value: 20.087 - type: recall_at_100 value: 41.457 - type: recall_at_1000 value: 67.10199999999999 - type: recall_at_3 value: 10.313 - type: recall_at_5 value: 13.927999999999999 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.27341574565806 - type: cos_sim_spearman value: 79.66419880841734 - type: euclidean_pearson value: 81.32473321838208 - type: euclidean_spearman value: 79.29828832085133 - type: manhattan_pearson value: 81.25554065883132 - type: manhattan_spearman value: 79.23275543279853 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 83.40468875905418 - type: cos_sim_spearman value: 74.2189990321174 - type: euclidean_pearson value: 80.74376966290956 - type: euclidean_spearman value: 74.97663839079335 - type: manhattan_pearson value: 80.69779331646207 - type: manhattan_spearman value: 75.00225252917613 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 82.5745290053095 - type: cos_sim_spearman value: 83.31401180333397 - type: euclidean_pearson value: 82.96500607325534 - type: euclidean_spearman value: 83.8534967935793 - type: manhattan_pearson value: 82.83112050632508 - type: manhattan_spearman value: 83.70877296557838 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 80.67833656607704 - type: cos_sim_spearman value: 78.52252410630707 - type: euclidean_pearson value: 80.071189514343 - type: euclidean_spearman value: 78.95143545742796 - type: manhattan_pearson value: 80.0128926165121 - type: 
manhattan_spearman value: 78.91236678732628 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.48437639980746 - type: cos_sim_spearman value: 88.34876527774259 - type: euclidean_pearson value: 87.64898081823888 - type: euclidean_spearman value: 88.58937180804213 - type: manhattan_pearson value: 87.5942417815288 - type: manhattan_spearman value: 88.53013922267687 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.69189187164781 - type: cos_sim_spearman value: 84.15327883572112 - type: euclidean_pearson value: 83.64202266685898 - type: euclidean_spearman value: 84.6219602318862 - type: manhattan_pearson value: 83.53256698709998 - type: manhattan_spearman value: 84.49260712904946 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.09508017611589 - type: cos_sim_spearman value: 87.23010990417097 - type: euclidean_pearson value: 87.62545569077133 - type: euclidean_spearman value: 86.71152051711714 - type: manhattan_pearson value: 87.5057154278377 - type: manhattan_spearman value: 86.60611898281267 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 61.72129893941176 - type: cos_sim_spearman value: 62.87871412069194 - type: euclidean_pearson value: 63.21077648290454 - type: euclidean_spearman value: 63.03263080805978 - type: manhattan_pearson value: 63.20740860135976 - type: manhattan_spearman value: 62.89930471802817 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.039118236799 - type: cos_sim_spearman value: 86.18102563389962 - type: euclidean_pearson value: 85.62977041471879 - type: euclidean_spearman value: 86.02478990544347 - type: manhattan_pearson value: 85.60786740521806 - type: manhattan_spearman value: 85.99546210442547 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.89875069737266 - type: mrr value: 95.42621322033087 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 58.660999999999994 - type: map_at_10 value: 68.738 - type: map_at_100 value: 69.33200000000001 - type: map_at_1000 value: 69.352 - type: map_at_3 value: 66.502 - type: map_at_5 value: 67.686 - type: mrr_at_1 value: 61.667 - type: mrr_at_10 value: 70.003 - type: mrr_at_100 value: 70.441 - type: mrr_at_1000 value: 70.46 - type: mrr_at_3 value: 68.278 - type: mrr_at_5 value: 69.194 - type: ndcg_at_1 value: 61.667 - type: ndcg_at_10 value: 73.083 - type: ndcg_at_100 value: 75.56 - type: ndcg_at_1000 value: 76.01400000000001 - type: ndcg_at_3 value: 69.28699999999999 - type: ndcg_at_5 value: 70.85000000000001 - type: precision_at_1 value: 61.667 - type: precision_at_10 value: 9.6 - type: precision_at_100 value: 1.087 - type: precision_at_1000 value: 0.11199999999999999 - type: 
precision_at_3 value: 27.111 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 58.660999999999994 - type: recall_at_10 value: 85.02199999999999 - type: recall_at_100 value: 95.933 - type: recall_at_1000 value: 99.333 - type: recall_at_3 value: 74.506 - type: recall_at_5 value: 78.583 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8029702970297 - type: cos_sim_ap value: 94.87673936635738 - type: cos_sim_f1 value: 90.00502260170768 - type: cos_sim_precision value: 90.41372351160445 - type: cos_sim_recall value: 89.60000000000001 - type: dot_accuracy value: 99.57524752475247 - type: dot_ap value: 84.81717934496321 - type: dot_f1 value: 78.23026646556059 - type: dot_precision value: 78.66531850353893 - type: dot_recall value: 77.8 - type: euclidean_accuracy value: 99.8029702970297 - type: euclidean_ap value: 94.74658253135284 - type: euclidean_f1 value: 90.08470353761834 - type: euclidean_precision value: 89.77159880834161 - type: euclidean_recall value: 90.4 - type: manhattan_accuracy value: 99.8 - type: manhattan_ap value: 94.69224030742787 - type: manhattan_f1 value: 89.9502487562189 - type: manhattan_precision value: 89.50495049504951 - type: manhattan_recall value: 90.4 - type: max_accuracy value: 99.8029702970297 - type: max_ap value: 94.87673936635738 - type: max_f1 value: 90.08470353761834 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 63.906039623153035 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.56053830923281 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.15326538775145 - type: mrr value: 50.99279295051355 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.44030762047337 - type: cos_sim_spearman value: 31.00910300264562 - type: dot_pearson value: 26.88257194766013 - type: dot_spearman value: 27.646202679013577 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.247 - type: map_at_10 value: 1.9429999999999998 - type: map_at_100 value: 10.82 - type: map_at_1000 value: 25.972 - type: map_at_3 value: 0.653 - type: map_at_5 value: 1.057 - type: mrr_at_1 value: 94 - type: mrr_at_10 value: 96.333 - type: mrr_at_100 value: 96.333 - type: mrr_at_1000 value: 96.333 - type: mrr_at_3 value: 96.333 - type: mrr_at_5 value: 96.333 - type: ndcg_at_1 value: 89 - type: ndcg_at_10 value: 79.63799999999999 - type: ndcg_at_100 value: 57.961 - type: ndcg_at_1000 value: 50.733 - type: ndcg_at_3 value: 84.224 - type: ndcg_at_5 value: 82.528 - type: precision_at_1 value: 94 - type: precision_at_10 value: 84.2 - type: precision_at_100 value: 59.36 - type: precision_at_1000 value: 22.738 - type: precision_at_3 value: 88 - 
type: precision_at_5 value: 86.8 - type: recall_at_1 value: 0.247 - type: recall_at_10 value: 2.131 - type: recall_at_100 value: 14.035 - type: recall_at_1000 value: 47.457 - type: recall_at_3 value: 0.6779999999999999 - type: recall_at_5 value: 1.124 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.603 - type: map_at_10 value: 11.667 - type: map_at_100 value: 16.474 - type: map_at_1000 value: 18.074 - type: map_at_3 value: 6.03 - type: map_at_5 value: 8.067 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 51.063 - type: mrr_at_100 value: 51.908 - type: mrr_at_1000 value: 51.908 - type: mrr_at_3 value: 47.959 - type: mrr_at_5 value: 49.694 - type: ndcg_at_1 value: 32.653 - type: ndcg_at_10 value: 28.305000000000003 - type: ndcg_at_100 value: 35.311 - type: ndcg_at_1000 value: 47.644999999999996 - type: ndcg_at_3 value: 32.187 - type: ndcg_at_5 value: 29.134999999999998 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 26.122 - type: precision_at_100 value: 6.755 - type: precision_at_1000 value: 1.467 - type: precision_at_3 value: 34.694 - type: precision_at_5 value: 30.203999999999997 - type: recall_at_1 value: 2.603 - type: recall_at_10 value: 18.716 - type: recall_at_100 value: 42.512 - type: recall_at_1000 value: 79.32000000000001 - type: recall_at_3 value: 7.59 - type: recall_at_5 value: 10.949 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 74.117 - type: ap value: 15.89357321699319 - type: f1 value: 57.14385866369257 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.38370118845502 - type: f1 value: 61.67038693866553 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 42.57754941537969 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.1775049174465 - type: cos_sim_ap value: 74.3994879581554 - type: cos_sim_f1 value: 69.32903671308551 - type: cos_sim_precision value: 61.48193508879363 - type: cos_sim_recall value: 79.47229551451187 - type: dot_accuracy value: 81.65345413363534 - type: dot_ap value: 59.690898346685096 - type: dot_f1 value: 57.27622826467499 - type: dot_precision value: 51.34965473948525 - type: dot_recall value: 64.74934036939314 - type: euclidean_accuracy value: 86.04637301066937 - type: euclidean_ap value: 74.33009001775268 - type: euclidean_f1 value: 69.2458374142997 - type: euclidean_precision value: 64.59570580173595 - type: euclidean_recall value: 74.6174142480211 - type: manhattan_accuracy value: 86.11193896405793 - type: manhattan_ap value: 74.2964140130421 - type: manhattan_f1 value: 69.11601528788066 - type: manhattan_precision value: 64.86924323073363 - type: manhattan_recall value: 73.95778364116094 - type: max_accuracy value: 86.1775049174465 - type: max_ap value: 74.3994879581554 - type: max_f1 value: 
69.32903671308551 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.01501921061823 - type: cos_sim_ap value: 85.97819287477351 - type: cos_sim_f1 value: 78.33882858518875 - type: cos_sim_precision value: 75.49446626204926 - type: cos_sim_recall value: 81.40591315060055 - type: dot_accuracy value: 86.47494857763806 - type: dot_ap value: 78.77420360340282 - type: dot_f1 value: 73.06433247936238 - type: dot_precision value: 67.92140777983595 - type: dot_recall value: 79.04989220819218 - type: euclidean_accuracy value: 88.7297706368611 - type: euclidean_ap value: 85.61550568529317 - type: euclidean_f1 value: 77.84805525263539 - type: euclidean_precision value: 73.73639994491117 - type: euclidean_recall value: 82.44533415460425 - type: manhattan_accuracy value: 88.75111576823068 - type: manhattan_ap value: 85.58701671476263 - type: manhattan_f1 value: 77.70169909067856 - type: manhattan_precision value: 73.37666780704755 - type: manhattan_recall value: 82.5685247921158 - type: max_accuracy value: 89.01501921061823 - type: max_ap value: 85.97819287477351 - type: max_f1 value: 78.33882858518875 ---

## E5-base

**News (May 2023): please switch to [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2), which has better performance and the same usage.**

[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022

This model has 12 layers and the embedding size is 768.

## Usage

Below is an example of encoding queries and passages from the MS-MARCO passage ranking dataset.

```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # Mask out padding tokens, then mean-pool the remaining token embeddings.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]

tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base')
model = AutoModel.from_pretrained('intfloat/e5-base')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# Normalize embeddings so that dot products are cosine similarities
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```

## Training Details

Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).

## Benchmark Evaluation

Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).

## Support for Sentence Transformers

Below is an example of usage with `sentence_transformers`.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/e5-base')
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```

Package requirements: `pip install sentence_transformers~=2.2.2`

Contributors: [michaelfeil](https://huggingface.co/michaelfeil)

## FAQ

**1. Do I need to add the prefix "query: " and "passage: " to input texts?**

Yes, this is how the model is trained; otherwise you will see a performance degradation. Here are some rules of thumb:

- Use "query: " and "passage: " respectively for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.
- Use the "query: " prefix for symmetric tasks such as semantic similarity and paraphrase retrieval (a minimal sketch of this usage appears after the Limitations section below).
- Use the "query: " prefix if you want to use embeddings as features, e.g. for linear probing classification or clustering.

**2. Why are my reproduced results slightly different from those reported in the model card?**

Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.

**3. Why are the cosine similarity scores distributed between 0.7 and 1.0?**

This is known and expected behavior, since we use a low temperature of 0.01 for the InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores rather than their absolute values, so this should not be an issue.

## Citation

If you find our paper or models helpful, please consider citing as follows:

```
@article{wang2022text,
  title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2212.03533},
  year={2022}
}
```

## Limitations

This model only works for English texts.
Long texts will be truncated to at most 512 tokens.
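
As a small companion to FAQ item 1 above, the sketch below illustrates the symmetric-task usage in which every input gets the "query: " prefix. It is a minimal, illustrative example, not an official recipe from the paper: the two sentences are made up, and `sentence_transformers` is assumed to be installed as described earlier in this card.

```python
from sentence_transformers import SentenceTransformer

# Minimal sketch (assumption: illustrative inputs, not from the paper):
# for symmetric tasks such as semantic similarity, both texts use the
# "query: " prefix, per the FAQ above.
model = SentenceTransformer('intfloat/e5-base')

texts = [
    'query: A man is eating food.',     # illustrative sentence
    'query: A man is having a meal.',   # illustrative paraphrase
]

# With normalize_embeddings=True, the dot product equals cosine similarity.
embeddings = model.encode(texts, normalize_embeddings=True)
similarity = float(embeddings[0] @ embeddings[1])
print(f"cosine similarity: {similarity:.3f}")
```

As noted in the FAQ, only the relative ordering of such scores is meaningful; absolute values will typically fall between 0.7 and 1.0.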
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
{"language": ["en"], "license": "mit", "tags": ["mteb", "Sentence Transformers", "sentence-similarity", "sentence-transformers"], "model-index": [{"name": "e5-base", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 79.71641791044777}, {"type": "ap", "value": 44.15426065428253}, {"type": "f1", "value": 73.89474407693241}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 87.9649}, {"type": "ap", "value": 84.10171551915973}, {"type": "f1", "value": 87.94148377827356}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 42.645999999999994}, {"type": "f1", "value": 42.230574673549}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 26.814}, {"type": "map_at_10", "value": 42.681999999999995}, {"type": "map_at_100", "value": 43.714}, {"type": "map_at_1000", "value": 43.724000000000004}, {"type": "map_at_3", "value": 38.11}, {"type": "map_at_5", "value": 40.666999999999994}, {"type": "mrr_at_1", "value": 27.168999999999997}, {"type": "mrr_at_10", "value": 42.84}, {"type": "mrr_at_100", "value": 43.864}, {"type": "mrr_at_1000", "value": 43.875}, {"type": "mrr_at_3", "value": 38.193}, {"type": "mrr_at_5", "value": 40.793}, {"type": "ndcg_at_1", "value": 26.814}, {"type": "ndcg_at_10", "value": 51.410999999999994}, {"type": "ndcg_at_100", "value": 55.713}, {"type": "ndcg_at_1000", "value": 55.957}, {"type": "ndcg_at_3", "value": 41.955}, {"type": "ndcg_at_5", "value": 46.558}, {"type": "precision_at_1", "value": 26.814}, {"type": "precision_at_10", "value": 7.922999999999999}, {"type": "precision_at_100", "value": 0.9780000000000001}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 17.71}, {"type": "precision_at_5", "value": 12.859000000000002}, {"type": "recall_at_1", "value": 26.814}, {"type": "recall_at_10", "value": 79.232}, {"type": "recall_at_100", "value": 97.795}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 53.129000000000005}, {"type": "recall_at_5", "value": 64.29599999999999}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 44.56933066536439}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 40.47647746165173}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": 
"2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 59.65675531567043}, {"type": "mrr", "value": 72.95255683067317}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.83147014162338}, {"type": "cos_sim_spearman", "value": 85.1031439521441}, {"type": "euclidean_pearson", "value": 83.53609085510973}, {"type": "euclidean_spearman", "value": 84.59650590202833}, {"type": "manhattan_pearson", "value": 83.14611947586386}, {"type": "manhattan_spearman", "value": 84.13384475757064}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 83.32792207792208}, {"type": "f1", "value": 83.32037485050513}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 36.18605446588703}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 32.72379130181917}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 30.659}, {"type": "map_at_10", "value": 40.333999999999996}, {"type": "map_at_100", "value": 41.763}, {"type": "map_at_1000", "value": 41.894}, {"type": "map_at_3", "value": 37.561}, {"type": "map_at_5", "value": 39.084}, {"type": "mrr_at_1", "value": 37.482}, {"type": "mrr_at_10", "value": 45.736}, {"type": "mrr_at_100", "value": 46.591}, {"type": "mrr_at_1000", "value": 46.644999999999996}, {"type": "mrr_at_3", "value": 43.491}, {"type": "mrr_at_5", "value": 44.75}, {"type": "ndcg_at_1", "value": 37.482}, {"type": "ndcg_at_10", "value": 45.606}, {"type": "ndcg_at_100", "value": 51.172}, {"type": "ndcg_at_1000", "value": 53.407000000000004}, {"type": "ndcg_at_3", "value": 41.808}, {"type": "ndcg_at_5", "value": 43.449}, {"type": "precision_at_1", "value": 37.482}, {"type": "precision_at_10", "value": 8.254999999999999}, {"type": "precision_at_100", "value": 1.3719999999999999}, {"type": "precision_at_1000", "value": 0.186}, {"type": "precision_at_3", "value": 19.695}, {"type": "precision_at_5", "value": 13.847999999999999}, {"type": "recall_at_1", "value": 30.659}, {"type": "recall_at_10", "value": 55.409}, {"type": "recall_at_100", "value": 78.687}, {"type": "recall_at_1000", "value": 93.068}, {"type": "recall_at_3", "value": 43.891999999999996}, {"type": "recall_at_5", "value": 48.678}, {"type": "map_at_1", "value": 30.977}, {"type": "map_at_10", "value": 40.296}, {"type": "map_at_100", "value": 41.453}, {"type": "map_at_1000", "value": 41.581}, {"type": "map_at_3", "value": 37.619}, {"type": "map_at_5", "value": 39.181}, {"type": "mrr_at_1", "value": 39.108}, {"type": "mrr_at_10", "value": 46.894000000000005}, {"type": "mrr_at_100", "value": 47.55}, {"type": "mrr_at_1000", "value": 47.598}, {"type": "mrr_at_3", "value": 
44.766}, {"type": "mrr_at_5", "value": 46.062999999999995}, {"type": "ndcg_at_1", "value": 39.108}, {"type": "ndcg_at_10", "value": 45.717}, {"type": "ndcg_at_100", "value": 49.941}, {"type": "ndcg_at_1000", "value": 52.138}, {"type": "ndcg_at_3", "value": 42.05}, {"type": "ndcg_at_5", "value": 43.893}, {"type": "precision_at_1", "value": 39.108}, {"type": "precision_at_10", "value": 8.306}, {"type": "precision_at_100", "value": 1.3419999999999999}, {"type": "precision_at_1000", "value": 0.184}, {"type": "precision_at_3", "value": 19.979}, {"type": "precision_at_5", "value": 14.038}, {"type": "recall_at_1", "value": 30.977}, {"type": "recall_at_10", "value": 54.688}, {"type": "recall_at_100", "value": 72.556}, {"type": "recall_at_1000", "value": 86.53800000000001}, {"type": "recall_at_3", "value": 43.388}, {"type": "recall_at_5", "value": 48.717}, {"type": "map_at_1", "value": 39.812}, {"type": "map_at_10", "value": 50.1}, {"type": "map_at_100", "value": 51.193999999999996}, {"type": "map_at_1000", "value": 51.258}, {"type": "map_at_3", "value": 47.510999999999996}, {"type": "map_at_5", "value": 48.891}, {"type": "mrr_at_1", "value": 45.266}, {"type": "mrr_at_10", "value": 53.459999999999994}, {"type": "mrr_at_100", "value": 54.19199999999999}, {"type": "mrr_at_1000", "value": 54.228}, {"type": "mrr_at_3", "value": 51.296}, {"type": "mrr_at_5", "value": 52.495999999999995}, {"type": "ndcg_at_1", "value": 45.266}, {"type": "ndcg_at_10", "value": 55.034000000000006}, {"type": "ndcg_at_100", "value": 59.458}, {"type": "ndcg_at_1000", "value": 60.862}, {"type": "ndcg_at_3", "value": 50.52799999999999}, {"type": "ndcg_at_5", "value": 52.564}, {"type": "precision_at_1", "value": 45.266}, {"type": "precision_at_10", "value": 8.483}, {"type": "precision_at_100", "value": 1.162}, {"type": "precision_at_1000", "value": 0.133}, {"type": "precision_at_3", "value": 21.944}, {"type": "precision_at_5", "value": 14.721}, {"type": "recall_at_1", "value": 39.812}, {"type": "recall_at_10", "value": 66.36}, {"type": "recall_at_100", "value": 85.392}, {"type": "recall_at_1000", "value": 95.523}, {"type": "recall_at_3", "value": 54.127}, {"type": "recall_at_5", "value": 59.245000000000005}, {"type": "map_at_1", "value": 26.186}, {"type": "map_at_10", "value": 33.18}, {"type": "map_at_100", "value": 34.052}, {"type": "map_at_1000", "value": 34.149}, {"type": "map_at_3", "value": 31.029}, {"type": "map_at_5", "value": 32.321}, {"type": "mrr_at_1", "value": 28.136}, {"type": "mrr_at_10", "value": 35.195}, {"type": "mrr_at_100", "value": 35.996}, {"type": "mrr_at_1000", "value": 36.076}, {"type": "mrr_at_3", "value": 33.051}, {"type": "mrr_at_5", "value": 34.407}, {"type": "ndcg_at_1", "value": 28.136}, {"type": "ndcg_at_10", "value": 37.275999999999996}, {"type": "ndcg_at_100", "value": 41.935}, {"type": "ndcg_at_1000", "value": 44.389}, {"type": "ndcg_at_3", "value": 33.059}, {"type": "ndcg_at_5", "value": 35.313}, {"type": "precision_at_1", "value": 28.136}, {"type": "precision_at_10", "value": 5.457999999999999}, {"type": "precision_at_100", "value": 0.826}, {"type": "precision_at_1000", "value": 0.107}, {"type": "precision_at_3", "value": 13.522}, {"type": "precision_at_5", "value": 9.424000000000001}, {"type": "recall_at_1", "value": 26.186}, {"type": "recall_at_10", "value": 47.961999999999996}, {"type": "recall_at_100", "value": 70.072}, {"type": "recall_at_1000", "value": 88.505}, {"type": "recall_at_3", "value": 36.752}, {"type": "recall_at_5", "value": 42.168}, {"type": "map_at_1", "value": 
16.586000000000002}, {"type": "map_at_10", "value": 23.637}, {"type": "map_at_100", "value": 24.82}, {"type": "map_at_1000", "value": 24.95}, {"type": "map_at_3", "value": 21.428}, {"type": "map_at_5", "value": 22.555}, {"type": "mrr_at_1", "value": 20.771}, {"type": "mrr_at_10", "value": 27.839999999999996}, {"type": "mrr_at_100", "value": 28.887}, {"type": "mrr_at_1000", "value": 28.967}, {"type": "mrr_at_3", "value": 25.56}, {"type": "mrr_at_5", "value": 26.723000000000003}, {"type": "ndcg_at_1", "value": 20.771}, {"type": "ndcg_at_10", "value": 28.255000000000003}, {"type": "ndcg_at_100", "value": 33.886}, {"type": "ndcg_at_1000", "value": 36.963}, {"type": "ndcg_at_3", "value": 24.056}, {"type": "ndcg_at_5", "value": 25.818}, {"type": "precision_at_1", "value": 20.771}, {"type": "precision_at_10", "value": 5.1}, {"type": "precision_at_100", "value": 0.9119999999999999}, {"type": "precision_at_1000", "value": 0.132}, {"type": "precision_at_3", "value": 11.526}, {"type": "precision_at_5", "value": 8.158999999999999}, {"type": "recall_at_1", "value": 16.586000000000002}, {"type": "recall_at_10", "value": 38.456}, {"type": "recall_at_100", "value": 62.666}, {"type": "recall_at_1000", "value": 84.47}, {"type": "recall_at_3", "value": 26.765}, {"type": "recall_at_5", "value": 31.297000000000004}, {"type": "map_at_1", "value": 28.831}, {"type": "map_at_10", "value": 37.545}, {"type": "map_at_100", "value": 38.934999999999995}, {"type": "map_at_1000", "value": 39.044000000000004}, {"type": "map_at_3", "value": 34.601}, {"type": "map_at_5", "value": 36.302}, {"type": "mrr_at_1", "value": 34.264}, {"type": "mrr_at_10", "value": 42.569}, {"type": "mrr_at_100", "value": 43.514}, {"type": "mrr_at_1000", "value": 43.561}, {"type": "mrr_at_3", "value": 40.167}, {"type": "mrr_at_5", "value": 41.678}, {"type": "ndcg_at_1", "value": 34.264}, {"type": "ndcg_at_10", "value": 42.914}, {"type": "ndcg_at_100", "value": 48.931999999999995}, {"type": "ndcg_at_1000", "value": 51.004000000000005}, {"type": "ndcg_at_3", "value": 38.096999999999994}, {"type": "ndcg_at_5", "value": 40.509}, {"type": "precision_at_1", "value": 34.264}, {"type": "precision_at_10", "value": 7.642}, {"type": "precision_at_100", "value": 1.258}, {"type": "precision_at_1000", "value": 0.161}, {"type": "precision_at_3", "value": 17.453}, {"type": "precision_at_5", "value": 12.608}, {"type": "recall_at_1", "value": 28.831}, {"type": "recall_at_10", "value": 53.56999999999999}, {"type": "recall_at_100", "value": 79.26100000000001}, {"type": "recall_at_1000", "value": 92.862}, {"type": "recall_at_3", "value": 40.681}, {"type": "recall_at_5", "value": 46.597}, {"type": "map_at_1", "value": 27.461000000000002}, {"type": "map_at_10", "value": 35.885}, {"type": "map_at_100", "value": 37.039}, {"type": "map_at_1000", "value": 37.16}, {"type": "map_at_3", "value": 33.451}, {"type": "map_at_5", "value": 34.807}, {"type": "mrr_at_1", "value": 34.018}, {"type": "mrr_at_10", "value": 41.32}, {"type": "mrr_at_100", "value": 42.157}, {"type": "mrr_at_1000", "value": 42.223}, {"type": "mrr_at_3", "value": 39.288000000000004}, {"type": "mrr_at_5", "value": 40.481}, {"type": "ndcg_at_1", "value": 34.018}, {"type": "ndcg_at_10", "value": 40.821000000000005}, {"type": "ndcg_at_100", "value": 46.053}, {"type": "ndcg_at_1000", "value": 48.673}, {"type": "ndcg_at_3", "value": 36.839}, {"type": "ndcg_at_5", "value": 38.683}, {"type": "precision_at_1", "value": 34.018}, {"type": "precision_at_10", "value": 7.009}, {"type": "precision_at_100", "value": 1.123}, 
{"type": "precision_at_1000", "value": 0.153}, {"type": "precision_at_3", "value": 16.933}, {"type": "precision_at_5", "value": 11.826}, {"type": "recall_at_1", "value": 27.461000000000002}, {"type": "recall_at_10", "value": 50.285000000000004}, {"type": "recall_at_100", "value": 73.25500000000001}, {"type": "recall_at_1000", "value": 91.17699999999999}, {"type": "recall_at_3", "value": 39.104}, {"type": "recall_at_5", "value": 43.968}, {"type": "map_at_1", "value": 26.980083333333337}, {"type": "map_at_10", "value": 34.47208333333333}, {"type": "map_at_100", "value": 35.609249999999996}, {"type": "map_at_1000", "value": 35.72833333333333}, {"type": "map_at_3", "value": 32.189416666666666}, {"type": "map_at_5", "value": 33.44683333333334}, {"type": "mrr_at_1", "value": 31.731666666666662}, {"type": "mrr_at_10", "value": 38.518}, {"type": "mrr_at_100", "value": 39.38166666666667}, {"type": "mrr_at_1000", "value": 39.446999999999996}, {"type": "mrr_at_3", "value": 36.49966666666668}, {"type": "mrr_at_5", "value": 37.639916666666664}, {"type": "ndcg_at_1", "value": 31.731666666666662}, {"type": "ndcg_at_10", "value": 38.92033333333333}, {"type": "ndcg_at_100", "value": 44.01675}, {"type": "ndcg_at_1000", "value": 46.51075}, {"type": "ndcg_at_3", "value": 35.09766666666667}, {"type": "ndcg_at_5", "value": 36.842999999999996}, {"type": "precision_at_1", "value": 31.731666666666662}, {"type": "precision_at_10", "value": 6.472583333333332}, {"type": "precision_at_100", "value": 1.0665}, {"type": "precision_at_1000", "value": 0.14725000000000002}, {"type": "precision_at_3", "value": 15.659083333333331}, {"type": "precision_at_5", "value": 10.878833333333333}, {"type": "recall_at_1", "value": 26.980083333333337}, {"type": "recall_at_10", "value": 48.13925}, {"type": "recall_at_100", "value": 70.70149999999998}, {"type": "recall_at_1000", "value": 88.10775000000001}, {"type": "recall_at_3", "value": 37.30091666666667}, {"type": "recall_at_5", "value": 41.90358333333333}, {"type": "map_at_1", "value": 25.607999999999997}, {"type": "map_at_10", "value": 30.523}, {"type": "map_at_100", "value": 31.409}, {"type": "map_at_1000", "value": 31.507}, {"type": "map_at_3", "value": 28.915000000000003}, {"type": "map_at_5", "value": 29.756}, {"type": "mrr_at_1", "value": 28.681}, {"type": "mrr_at_10", "value": 33.409}, {"type": "mrr_at_100", "value": 34.241}, {"type": "mrr_at_1000", "value": 34.313}, {"type": "mrr_at_3", "value": 32.029999999999994}, {"type": "mrr_at_5", "value": 32.712}, {"type": "ndcg_at_1", "value": 28.681}, {"type": "ndcg_at_10", "value": 33.733000000000004}, {"type": "ndcg_at_100", "value": 38.32}, {"type": "ndcg_at_1000", "value": 40.937}, {"type": "ndcg_at_3", "value": 30.898999999999997}, {"type": "ndcg_at_5", "value": 32.088}, {"type": "precision_at_1", "value": 28.681}, {"type": "precision_at_10", "value": 4.968999999999999}, {"type": "precision_at_100", "value": 0.79}, {"type": "precision_at_1000", "value": 0.11}, {"type": "precision_at_3", "value": 12.73}, {"type": "precision_at_5", "value": 8.558}, {"type": "recall_at_1", "value": 25.607999999999997}, {"type": "recall_at_10", "value": 40.722}, {"type": "recall_at_100", "value": 61.956999999999994}, {"type": "recall_at_1000", "value": 81.43}, {"type": "recall_at_3", "value": 32.785}, {"type": "recall_at_5", "value": 35.855}, {"type": "map_at_1", "value": 20.399}, {"type": "map_at_10", "value": 25.968000000000004}, {"type": "map_at_100", "value": 26.985999999999997}, {"type": "map_at_1000", "value": 27.105}, {"type": "map_at_3", 
"value": 24.215}, {"type": "map_at_5", "value": 25.157}, {"type": "mrr_at_1", "value": 24.708}, {"type": "mrr_at_10", "value": 29.971999999999998}, {"type": "mrr_at_100", "value": 30.858}, {"type": "mrr_at_1000", "value": 30.934}, {"type": "mrr_at_3", "value": 28.304000000000002}, {"type": "mrr_at_5", "value": 29.183999999999997}, {"type": "ndcg_at_1", "value": 24.708}, {"type": "ndcg_at_10", "value": 29.676000000000002}, {"type": "ndcg_at_100", "value": 34.656}, {"type": "ndcg_at_1000", "value": 37.588}, {"type": "ndcg_at_3", "value": 26.613}, {"type": "ndcg_at_5", "value": 27.919}, {"type": "precision_at_1", "value": 24.708}, {"type": "precision_at_10", "value": 5.01}, {"type": "precision_at_100", "value": 0.876}, {"type": "precision_at_1000", "value": 0.13}, {"type": "precision_at_3", "value": 11.975}, {"type": "precision_at_5", "value": 8.279}, {"type": "recall_at_1", "value": 20.399}, {"type": "recall_at_10", "value": 36.935}, {"type": "recall_at_100", "value": 59.532}, {"type": "recall_at_1000", "value": 80.58}, {"type": "recall_at_3", "value": 27.979}, {"type": "recall_at_5", "value": 31.636999999999997}, {"type": "map_at_1", "value": 27.606}, {"type": "map_at_10", "value": 34.213}, {"type": "map_at_100", "value": 35.339999999999996}, {"type": "map_at_1000", "value": 35.458}, {"type": "map_at_3", "value": 31.987}, {"type": "map_at_5", "value": 33.322}, {"type": "mrr_at_1", "value": 31.53}, {"type": "mrr_at_10", "value": 37.911}, {"type": "mrr_at_100", "value": 38.879000000000005}, {"type": "mrr_at_1000", "value": 38.956}, {"type": "mrr_at_3", "value": 35.868}, {"type": "mrr_at_5", "value": 37.047999999999995}, {"type": "ndcg_at_1", "value": 31.53}, {"type": "ndcg_at_10", "value": 38.312000000000005}, {"type": "ndcg_at_100", "value": 43.812}, {"type": "ndcg_at_1000", "value": 46.414}, {"type": "ndcg_at_3", "value": 34.319}, {"type": "ndcg_at_5", "value": 36.312}, {"type": "precision_at_1", "value": 31.53}, {"type": "precision_at_10", "value": 5.970000000000001}, {"type": "precision_at_100", "value": 0.9939999999999999}, {"type": "precision_at_1000", "value": 0.133}, {"type": "precision_at_3", "value": 14.738999999999999}, {"type": "precision_at_5", "value": 10.242999999999999}, {"type": "recall_at_1", "value": 27.606}, {"type": "recall_at_10", "value": 47.136}, {"type": "recall_at_100", "value": 71.253}, {"type": "recall_at_1000", "value": 89.39399999999999}, {"type": "recall_at_3", "value": 36.342}, {"type": "recall_at_5", "value": 41.388999999999996}, {"type": "map_at_1", "value": 24.855}, {"type": "map_at_10", "value": 31.963}, {"type": "map_at_100", "value": 33.371}, {"type": "map_at_1000", "value": 33.584}, {"type": "map_at_3", "value": 29.543999999999997}, {"type": "map_at_5", "value": 30.793}, {"type": "mrr_at_1", "value": 29.644}, {"type": "mrr_at_10", "value": 35.601}, {"type": "mrr_at_100", "value": 36.551}, {"type": "mrr_at_1000", "value": 36.623}, {"type": "mrr_at_3", "value": 33.399}, {"type": "mrr_at_5", "value": 34.575}, {"type": "ndcg_at_1", "value": 29.644}, {"type": "ndcg_at_10", "value": 36.521}, {"type": "ndcg_at_100", "value": 42.087}, {"type": "ndcg_at_1000", "value": 45.119}, {"type": "ndcg_at_3", "value": 32.797}, {"type": "ndcg_at_5", "value": 34.208}, {"type": "precision_at_1", "value": 29.644}, {"type": "precision_at_10", "value": 6.7}, {"type": "precision_at_100", "value": 1.374}, {"type": "precision_at_1000", "value": 0.22899999999999998}, {"type": "precision_at_3", "value": 15.152}, {"type": "precision_at_5", "value": 10.671999999999999}, {"type": 
"recall_at_1", "value": 24.855}, {"type": "recall_at_10", "value": 45.449}, {"type": "recall_at_100", "value": 70.921}, {"type": "recall_at_1000", "value": 90.629}, {"type": "recall_at_3", "value": 33.526}, {"type": "recall_at_5", "value": 37.848}, {"type": "map_at_1", "value": 24.781}, {"type": "map_at_10", "value": 30.020999999999997}, {"type": "map_at_100", "value": 30.948999999999998}, {"type": "map_at_1000", "value": 31.05}, {"type": "map_at_3", "value": 28.412}, {"type": "map_at_5", "value": 29.193}, {"type": "mrr_at_1", "value": 27.172}, {"type": "mrr_at_10", "value": 32.309}, {"type": "mrr_at_100", "value": 33.164}, {"type": "mrr_at_1000", "value": 33.239999999999995}, {"type": "mrr_at_3", "value": 30.775999999999996}, {"type": "mrr_at_5", "value": 31.562}, {"type": "ndcg_at_1", "value": 27.172}, {"type": "ndcg_at_10", "value": 33.178999999999995}, {"type": "ndcg_at_100", "value": 37.949}, {"type": "ndcg_at_1000", "value": 40.635}, {"type": "ndcg_at_3", "value": 30.107}, {"type": "ndcg_at_5", "value": 31.36}, {"type": "precision_at_1", "value": 27.172}, {"type": "precision_at_10", "value": 4.769}, {"type": "precision_at_100", "value": 0.769}, {"type": "precision_at_1000", "value": 0.109}, {"type": "precision_at_3", "value": 12.261}, {"type": "precision_at_5", "value": 8.17}, {"type": "recall_at_1", "value": 24.781}, {"type": "recall_at_10", "value": 40.699000000000005}, {"type": "recall_at_100", "value": 62.866}, {"type": "recall_at_1000", "value": 83.11699999999999}, {"type": "recall_at_3", "value": 32.269999999999996}, {"type": "recall_at_5", "value": 35.443999999999996}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 5.2139999999999995}, {"type": "map_at_10", "value": 9.986}, {"type": "map_at_100", "value": 11.343}, {"type": "map_at_1000", "value": 11.55}, {"type": "map_at_3", "value": 7.961}, {"type": "map_at_5", "value": 8.967}, {"type": "mrr_at_1", "value": 12.052}, {"type": "mrr_at_10", "value": 20.165}, {"type": "mrr_at_100", "value": 21.317}, {"type": "mrr_at_1000", "value": 21.399}, {"type": "mrr_at_3", "value": 17.079}, {"type": "mrr_at_5", "value": 18.695}, {"type": "ndcg_at_1", "value": 12.052}, {"type": "ndcg_at_10", "value": 15.375}, {"type": "ndcg_at_100", "value": 21.858}, {"type": "ndcg_at_1000", "value": 26.145000000000003}, {"type": "ndcg_at_3", "value": 11.334}, {"type": "ndcg_at_5", "value": 12.798000000000002}, {"type": "precision_at_1", "value": 12.052}, {"type": "precision_at_10", "value": 5.16}, {"type": "precision_at_100", "value": 1.206}, {"type": "precision_at_1000", "value": 0.198}, {"type": "precision_at_3", "value": 8.73}, {"type": "precision_at_5", "value": 7.114}, {"type": "recall_at_1", "value": 5.2139999999999995}, {"type": "recall_at_10", "value": 20.669999999999998}, {"type": "recall_at_100", "value": 43.901}, {"type": "recall_at_1000", "value": 68.447}, {"type": "recall_at_3", "value": 11.049000000000001}, {"type": "recall_at_5", "value": 14.652999999999999}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 8.511000000000001}, {"type": "map_at_10", "value": 19.503}, {"type": "map_at_100", "value": 27.46}, {"type": "map_at_1000", "value": 29.187}, {"type": "map_at_3", "value": 14.030999999999999}, {"type": "map_at_5", "value": 16.329}, {"type": 
"mrr_at_1", "value": 63.74999999999999}, {"type": "mrr_at_10", "value": 73.419}, {"type": "mrr_at_100", "value": 73.691}, {"type": "mrr_at_1000", "value": 73.697}, {"type": "mrr_at_3", "value": 71.792}, {"type": "mrr_at_5", "value": 72.979}, {"type": "ndcg_at_1", "value": 53.125}, {"type": "ndcg_at_10", "value": 41.02}, {"type": "ndcg_at_100", "value": 45.407}, {"type": "ndcg_at_1000", "value": 52.68000000000001}, {"type": "ndcg_at_3", "value": 46.088}, {"type": "ndcg_at_5", "value": 43.236000000000004}, {"type": "precision_at_1", "value": 63.74999999999999}, {"type": "precision_at_10", "value": 32.35}, {"type": "precision_at_100", "value": 10.363}, {"type": "precision_at_1000", "value": 2.18}, {"type": "precision_at_3", "value": 49.667}, {"type": "precision_at_5", "value": 41.5}, {"type": "recall_at_1", "value": 8.511000000000001}, {"type": "recall_at_10", "value": 24.851}, {"type": "recall_at_100", "value": 50.745}, {"type": "recall_at_1000", "value": 73.265}, {"type": "recall_at_3", "value": 15.716}, {"type": "recall_at_5", "value": 19.256}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 49.43500000000001}, {"type": "f1", "value": 44.56288273966374}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 40.858}, {"type": "map_at_10", "value": 52.276}, {"type": "map_at_100", "value": 52.928}, {"type": "map_at_1000", "value": 52.966}, {"type": "map_at_3", "value": 49.729}, {"type": "map_at_5", "value": 51.27}, {"type": "mrr_at_1", "value": 43.624}, {"type": "mrr_at_10", "value": 55.22899999999999}, {"type": "mrr_at_100", "value": 55.823}, {"type": "mrr_at_1000", "value": 55.85}, {"type": "mrr_at_3", "value": 52.739999999999995}, {"type": "mrr_at_5", "value": 54.251000000000005}, {"type": "ndcg_at_1", "value": 43.624}, {"type": "ndcg_at_10", "value": 58.23500000000001}, {"type": "ndcg_at_100", "value": 61.315}, {"type": "ndcg_at_1000", "value": 62.20099999999999}, {"type": "ndcg_at_3", "value": 53.22}, {"type": "ndcg_at_5", "value": 55.88999999999999}, {"type": "precision_at_1", "value": 43.624}, {"type": "precision_at_10", "value": 8.068999999999999}, {"type": "precision_at_100", "value": 0.975}, {"type": "precision_at_1000", "value": 0.107}, {"type": "precision_at_3", "value": 21.752}, {"type": "precision_at_5", "value": 14.515}, {"type": "recall_at_1", "value": 40.858}, {"type": "recall_at_10", "value": 73.744}, {"type": "recall_at_100", "value": 87.667}, {"type": "recall_at_1000", "value": 94.15599999999999}, {"type": "recall_at_3", "value": 60.287}, {"type": "recall_at_5", "value": 66.703}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 17.864}, {"type": "map_at_10", "value": 28.592000000000002}, {"type": "map_at_100", "value": 30.165}, {"type": "map_at_1000", "value": 30.364}, {"type": "map_at_3", "value": 24.586}, {"type": "map_at_5", "value": 26.717000000000002}, {"type": "mrr_at_1", "value": 35.031}, {"type": "mrr_at_10", "value": 43.876}, {"type": "mrr_at_100", "value": 44.683}, {"type": "mrr_at_1000", "value": 44.736}, {"type": "mrr_at_3", "value": 40.998000000000005}, {"type": "mrr_at_5", "value": 42.595}, 
{"type": "ndcg_at_1", "value": 35.031}, {"type": "ndcg_at_10", "value": 36.368}, {"type": "ndcg_at_100", "value": 42.472}, {"type": "ndcg_at_1000", "value": 45.973000000000006}, {"type": "ndcg_at_3", "value": 31.915}, {"type": "ndcg_at_5", "value": 33.394}, {"type": "precision_at_1", "value": 35.031}, {"type": "precision_at_10", "value": 10.139}, {"type": "precision_at_100", "value": 1.6420000000000001}, {"type": "precision_at_1000", "value": 0.22699999999999998}, {"type": "precision_at_3", "value": 21.142}, {"type": "precision_at_5", "value": 15.772}, {"type": "recall_at_1", "value": 17.864}, {"type": "recall_at_10", "value": 43.991}, {"type": "recall_at_100", "value": 66.796}, {"type": "recall_at_1000", "value": 87.64}, {"type": "recall_at_3", "value": 28.915999999999997}, {"type": "recall_at_5", "value": 35.185}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 36.556}, {"type": "map_at_10", "value": 53.056000000000004}, {"type": "map_at_100", "value": 53.909}, {"type": "map_at_1000", "value": 53.98}, {"type": "map_at_3", "value": 49.982}, {"type": "map_at_5", "value": 51.9}, {"type": "mrr_at_1", "value": 73.113}, {"type": "mrr_at_10", "value": 79.381}, {"type": "mrr_at_100", "value": 79.60300000000001}, {"type": "mrr_at_1000", "value": 79.617}, {"type": "mrr_at_3", "value": 78.298}, {"type": "mrr_at_5", "value": 78.995}, {"type": "ndcg_at_1", "value": 73.113}, {"type": "ndcg_at_10", "value": 62.21}, {"type": "ndcg_at_100", "value": 65.242}, {"type": "ndcg_at_1000", "value": 66.667}, {"type": "ndcg_at_3", "value": 57.717}, {"type": "ndcg_at_5", "value": 60.224}, {"type": "precision_at_1", "value": 73.113}, {"type": "precision_at_10", "value": 12.842999999999998}, {"type": "precision_at_100", "value": 1.522}, {"type": "precision_at_1000", "value": 0.17099999999999999}, {"type": "precision_at_3", "value": 36.178}, {"type": "precision_at_5", "value": 23.695}, {"type": "recall_at_1", "value": 36.556}, {"type": "recall_at_10", "value": 64.213}, {"type": "recall_at_100", "value": 76.077}, {"type": "recall_at_1000", "value": 85.53699999999999}, {"type": "recall_at_3", "value": 54.266999999999996}, {"type": "recall_at_5", "value": 59.236999999999995}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 75.958}, {"type": "ap", "value": 69.82869527654348}, {"type": "f1", "value": 75.89120903005633}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 23.608}, {"type": "map_at_10", "value": 36.144}, {"type": "map_at_100", "value": 37.244}, {"type": "map_at_1000", "value": 37.291999999999994}, {"type": "map_at_3", "value": 32.287}, {"type": "map_at_5", "value": 34.473}, {"type": "mrr_at_1", "value": 24.226}, {"type": "mrr_at_10", "value": 36.711}, {"type": "mrr_at_100", "value": 37.758}, {"type": "mrr_at_1000", "value": 37.8}, {"type": "mrr_at_3", "value": 32.92}, {"type": "mrr_at_5", "value": 35.104}, {"type": "ndcg_at_1", "value": 24.269}, {"type": "ndcg_at_10", "value": 43.138}, {"type": "ndcg_at_100", "value": 48.421}, {"type": "ndcg_at_1000", "value": 49.592000000000006}, {"type": "ndcg_at_3", "value": 35.269}, {"type": 
"ndcg_at_5", "value": 39.175}, {"type": "precision_at_1", "value": 24.269}, {"type": "precision_at_10", "value": 6.755999999999999}, {"type": "precision_at_100", "value": 0.941}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 14.938}, {"type": "precision_at_5", "value": 10.934000000000001}, {"type": "recall_at_1", "value": 23.608}, {"type": "recall_at_10", "value": 64.679}, {"type": "recall_at_100", "value": 89.027}, {"type": "recall_at_1000", "value": 97.91}, {"type": "recall_at_3", "value": 43.25}, {"type": "recall_at_5", "value": 52.617000000000004}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 93.21477428180576}, {"type": "f1", "value": 92.92502305092152}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 74.76744186046511}, {"type": "f1", "value": 59.19855520057899}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 72.24613315400134}, {"type": "f1", "value": 70.19950395651232}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 76.75857431069268}, {"type": "f1", "value": 76.5433450230191}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 31.525463791623604}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 28.28695907385136}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 30.068174046665224}, {"type": "mrr", "value": 30.827586642840803}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 6.322}, {"type": "map_at_10", "value": 13.919999999999998}, {"type": "map_at_100", "value": 17.416}, {"type": "map_at_1000", "value": 18.836}, {"type": "map_at_3", "value": 10.111}, {"type": "map_at_5", "value": 11.991999999999999}, {"type": "mrr_at_1", "value": 48.297000000000004}, {"type": "mrr_at_10", "value": 57.114}, {"type": "mrr_at_100", "value": 57.713}, {"type": "mrr_at_1000", "value": 57.751}, {"type": "mrr_at_3", "value": 55.108000000000004}, {"type": "mrr_at_5", "value": 56.533}, {"type": "ndcg_at_1", "value": 46.44}, {"type": "ndcg_at_10", "value": 36.589}, 
{"type": "ndcg_at_100", "value": 33.202}, {"type": "ndcg_at_1000", "value": 41.668}, {"type": "ndcg_at_3", "value": 41.302}, {"type": "ndcg_at_5", "value": 39.829}, {"type": "precision_at_1", "value": 47.988}, {"type": "precision_at_10", "value": 27.059}, {"type": "precision_at_100", "value": 8.235000000000001}, {"type": "precision_at_1000", "value": 2.091}, {"type": "precision_at_3", "value": 38.184000000000005}, {"type": "precision_at_5", "value": 34.365}, {"type": "recall_at_1", "value": 6.322}, {"type": "recall_at_10", "value": 18.288}, {"type": "recall_at_100", "value": 32.580999999999996}, {"type": "recall_at_1000", "value": 63.605999999999995}, {"type": "recall_at_3", "value": 11.266}, {"type": "recall_at_5", "value": 14.69}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 36.586999999999996}, {"type": "map_at_10", "value": 52.464}, {"type": "map_at_100", "value": 53.384}, {"type": "map_at_1000", "value": 53.405}, {"type": "map_at_3", "value": 48.408}, {"type": "map_at_5", "value": 50.788999999999994}, {"type": "mrr_at_1", "value": 40.904}, {"type": "mrr_at_10", "value": 54.974000000000004}, {"type": "mrr_at_100", "value": 55.60699999999999}, {"type": "mrr_at_1000", "value": 55.623}, {"type": "mrr_at_3", "value": 51.73799999999999}, {"type": "mrr_at_5", "value": 53.638}, {"type": "ndcg_at_1", "value": 40.904}, {"type": "ndcg_at_10", "value": 59.965999999999994}, {"type": "ndcg_at_100", "value": 63.613}, {"type": "ndcg_at_1000", "value": 64.064}, {"type": "ndcg_at_3", "value": 52.486}, {"type": "ndcg_at_5", "value": 56.377}, {"type": "precision_at_1", "value": 40.904}, {"type": "precision_at_10", "value": 9.551}, {"type": "precision_at_100", "value": 1.162}, {"type": "precision_at_1000", "value": 0.12}, {"type": "precision_at_3", "value": 23.552}, {"type": "precision_at_5", "value": 16.436999999999998}, {"type": "recall_at_1", "value": 36.586999999999996}, {"type": "recall_at_10", "value": 80.094}, {"type": "recall_at_100", "value": 95.515}, {"type": "recall_at_1000", "value": 98.803}, {"type": "recall_at_3", "value": 60.907}, {"type": "recall_at_5", "value": 69.817}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.422}, {"type": "map_at_10", "value": 84.113}, {"type": "map_at_100", "value": 84.744}, {"type": "map_at_1000", "value": 84.762}, {"type": "map_at_3", "value": 81.171}, {"type": "map_at_5", "value": 83.039}, {"type": "mrr_at_1", "value": 81.12}, {"type": "mrr_at_10", "value": 87.277}, {"type": "mrr_at_100", "value": 87.384}, {"type": "mrr_at_1000", "value": 87.385}, {"type": "mrr_at_3", "value": 86.315}, {"type": "mrr_at_5", "value": 86.981}, {"type": "ndcg_at_1", "value": 81.12}, {"type": "ndcg_at_10", "value": 87.92}, {"type": "ndcg_at_100", "value": 89.178}, {"type": "ndcg_at_1000", "value": 89.29899999999999}, {"type": "ndcg_at_3", "value": 85.076}, {"type": "ndcg_at_5", "value": 86.67099999999999}, {"type": "precision_at_1", "value": 81.12}, {"type": "precision_at_10", "value": 13.325999999999999}, {"type": "precision_at_100", "value": 1.524}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.16}, {"type": "precision_at_5", "value": 24.456}, {"type": "recall_at_1", "value": 70.422}, {"type": "recall_at_10", "value": 95.00800000000001}, {"type": 
"recall_at_100", "value": 99.38}, {"type": "recall_at_1000", "value": 99.94800000000001}, {"type": "recall_at_3", "value": 86.809}, {"type": "recall_at_5", "value": 91.334}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 48.18491891699636}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 62.190639679711914}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.478}, {"type": "map_at_10", "value": 11.268}, {"type": "map_at_100", "value": 13.129}, {"type": "map_at_1000", "value": 13.41}, {"type": "map_at_3", "value": 8.103}, {"type": "map_at_5", "value": 9.609}, {"type": "mrr_at_1", "value": 22}, {"type": "mrr_at_10", "value": 32.248}, {"type": "mrr_at_100", "value": 33.355000000000004}, {"type": "mrr_at_1000", "value": 33.42}, {"type": "mrr_at_3", "value": 29.15}, {"type": "mrr_at_5", "value": 30.785}, {"type": "ndcg_at_1", "value": 22}, {"type": "ndcg_at_10", "value": 18.990000000000002}, {"type": "ndcg_at_100", "value": 26.302999999999997}, {"type": "ndcg_at_1000", "value": 31.537}, {"type": "ndcg_at_3", "value": 18.034}, {"type": "ndcg_at_5", "value": 15.655}, {"type": "precision_at_1", "value": 22}, {"type": "precision_at_10", "value": 9.91}, {"type": "precision_at_100", "value": 2.0420000000000003}, {"type": "precision_at_1000", "value": 0.33}, {"type": "precision_at_3", "value": 16.933}, {"type": "precision_at_5", "value": 13.719999999999999}, {"type": "recall_at_1", "value": 4.478}, {"type": "recall_at_10", "value": 20.087}, {"type": "recall_at_100", "value": 41.457}, {"type": "recall_at_1000", "value": 67.10199999999999}, {"type": "recall_at_3", "value": 10.313}, {"type": "recall_at_5", "value": 13.927999999999999}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.27341574565806}, {"type": "cos_sim_spearman", "value": 79.66419880841734}, {"type": "euclidean_pearson", "value": 81.32473321838208}, {"type": "euclidean_spearman", "value": 79.29828832085133}, {"type": "manhattan_pearson", "value": 81.25554065883132}, {"type": "manhattan_spearman", "value": 79.23275543279853}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.40468875905418}, {"type": "cos_sim_spearman", "value": 74.2189990321174}, {"type": "euclidean_pearson", "value": 80.74376966290956}, {"type": "euclidean_spearman", "value": 74.97663839079335}, {"type": "manhattan_pearson", "value": 80.69779331646207}, {"type": "manhattan_spearman", "value": 75.00225252917613}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.5745290053095}, 
{"type": "cos_sim_spearman", "value": 83.31401180333397}, {"type": "euclidean_pearson", "value": 82.96500607325534}, {"type": "euclidean_spearman", "value": 83.8534967935793}, {"type": "manhattan_pearson", "value": 82.83112050632508}, {"type": "manhattan_spearman", "value": 83.70877296557838}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.67833656607704}, {"type": "cos_sim_spearman", "value": 78.52252410630707}, {"type": "euclidean_pearson", "value": 80.071189514343}, {"type": "euclidean_spearman", "value": 78.95143545742796}, {"type": "manhattan_pearson", "value": 80.0128926165121}, {"type": "manhattan_spearman", "value": 78.91236678732628}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.48437639980746}, {"type": "cos_sim_spearman", "value": 88.34876527774259}, {"type": "euclidean_pearson", "value": 87.64898081823888}, {"type": "euclidean_spearman", "value": 88.58937180804213}, {"type": "manhattan_pearson", "value": 87.5942417815288}, {"type": "manhattan_spearman", "value": 88.53013922267687}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.69189187164781}, {"type": "cos_sim_spearman", "value": 84.15327883572112}, {"type": "euclidean_pearson", "value": 83.64202266685898}, {"type": "euclidean_spearman", "value": 84.6219602318862}, {"type": "manhattan_pearson", "value": 83.53256698709998}, {"type": "manhattan_spearman", "value": 84.49260712904946}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.09508017611589}, {"type": "cos_sim_spearman", "value": 87.23010990417097}, {"type": "euclidean_pearson", "value": 87.62545569077133}, {"type": "euclidean_spearman", "value": 86.71152051711714}, {"type": "manhattan_pearson", "value": 87.5057154278377}, {"type": "manhattan_spearman", "value": 86.60611898281267}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 61.72129893941176}, {"type": "cos_sim_spearman", "value": 62.87871412069194}, {"type": "euclidean_pearson", "value": 63.21077648290454}, {"type": "euclidean_spearman", "value": 63.03263080805978}, {"type": "manhattan_pearson", "value": 63.20740860135976}, {"type": "manhattan_spearman", "value": 62.89930471802817}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.039118236799}, {"type": "cos_sim_spearman", "value": 86.18102563389962}, {"type": "euclidean_pearson", "value": 85.62977041471879}, {"type": "euclidean_spearman", "value": 86.02478990544347}, {"type": "manhattan_pearson", "value": 
85.60786740521806}, {"type": "manhattan_spearman", "value": 85.99546210442547}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 82.89875069737266}, {"type": "mrr", "value": 95.42621322033087}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 58.660999999999994}, {"type": "map_at_10", "value": 68.738}, {"type": "map_at_100", "value": 69.33200000000001}, {"type": "map_at_1000", "value": 69.352}, {"type": "map_at_3", "value": 66.502}, {"type": "map_at_5", "value": 67.686}, {"type": "mrr_at_1", "value": 61.667}, {"type": "mrr_at_10", "value": 70.003}, {"type": "mrr_at_100", "value": 70.441}, {"type": "mrr_at_1000", "value": 70.46}, {"type": "mrr_at_3", "value": 68.278}, {"type": "mrr_at_5", "value": 69.194}, {"type": "ndcg_at_1", "value": 61.667}, {"type": "ndcg_at_10", "value": 73.083}, {"type": "ndcg_at_100", "value": 75.56}, {"type": "ndcg_at_1000", "value": 76.01400000000001}, {"type": "ndcg_at_3", "value": 69.28699999999999}, {"type": "ndcg_at_5", "value": 70.85000000000001}, {"type": "precision_at_1", "value": 61.667}, {"type": "precision_at_10", "value": 9.6}, {"type": "precision_at_100", "value": 1.087}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 27.111}, {"type": "precision_at_5", "value": 17.467}, {"type": "recall_at_1", "value": 58.660999999999994}, {"type": "recall_at_10", "value": 85.02199999999999}, {"type": "recall_at_100", "value": 95.933}, {"type": "recall_at_1000", "value": 99.333}, {"type": "recall_at_3", "value": 74.506}, {"type": "recall_at_5", "value": 78.583}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.8029702970297}, {"type": "cos_sim_ap", "value": 94.87673936635738}, {"type": "cos_sim_f1", "value": 90.00502260170768}, {"type": "cos_sim_precision", "value": 90.41372351160445}, {"type": "cos_sim_recall", "value": 89.60000000000001}, {"type": "dot_accuracy", "value": 99.57524752475247}, {"type": "dot_ap", "value": 84.81717934496321}, {"type": "dot_f1", "value": 78.23026646556059}, {"type": "dot_precision", "value": 78.66531850353893}, {"type": "dot_recall", "value": 77.8}, {"type": "euclidean_accuracy", "value": 99.8029702970297}, {"type": "euclidean_ap", "value": 94.74658253135284}, {"type": "euclidean_f1", "value": 90.08470353761834}, {"type": "euclidean_precision", "value": 89.77159880834161}, {"type": "euclidean_recall", "value": 90.4}, {"type": "manhattan_accuracy", "value": 99.8}, {"type": "manhattan_ap", "value": 94.69224030742787}, {"type": "manhattan_f1", "value": 89.9502487562189}, {"type": "manhattan_precision", "value": 89.50495049504951}, {"type": "manhattan_recall", "value": 90.4}, {"type": "max_accuracy", "value": 99.8029702970297}, {"type": "max_ap", "value": 94.87673936635738}, {"type": "max_f1", "value": 90.08470353761834}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": 
"6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 63.906039623153035}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 32.56053830923281}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 50.15326538775145}, {"type": "mrr", "value": 50.99279295051355}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 31.44030762047337}, {"type": "cos_sim_spearman", "value": 31.00910300264562}, {"type": "dot_pearson", "value": 26.88257194766013}, {"type": "dot_spearman", "value": 27.646202679013577}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.247}, {"type": "map_at_10", "value": 1.9429999999999998}, {"type": "map_at_100", "value": 10.82}, {"type": "map_at_1000", "value": 25.972}, {"type": "map_at_3", "value": 0.653}, {"type": "map_at_5", "value": 1.057}, {"type": "mrr_at_1", "value": 94}, {"type": "mrr_at_10", "value": 96.333}, {"type": "mrr_at_100", "value": 96.333}, {"type": "mrr_at_1000", "value": 96.333}, {"type": "mrr_at_3", "value": 96.333}, {"type": "mrr_at_5", "value": 96.333}, {"type": "ndcg_at_1", "value": 89}, {"type": "ndcg_at_10", "value": 79.63799999999999}, {"type": "ndcg_at_100", "value": 57.961}, {"type": "ndcg_at_1000", "value": 50.733}, {"type": "ndcg_at_3", "value": 84.224}, {"type": "ndcg_at_5", "value": 82.528}, {"type": "precision_at_1", "value": 94}, {"type": "precision_at_10", "value": 84.2}, {"type": "precision_at_100", "value": 59.36}, {"type": "precision_at_1000", "value": 22.738}, {"type": "precision_at_3", "value": 88}, {"type": "precision_at_5", "value": 86.8}, {"type": "recall_at_1", "value": 0.247}, {"type": "recall_at_10", "value": 2.131}, {"type": "recall_at_100", "value": 14.035}, {"type": "recall_at_1000", "value": 47.457}, {"type": "recall_at_3", "value": 0.6779999999999999}, {"type": "recall_at_5", "value": 1.124}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.603}, {"type": "map_at_10", "value": 11.667}, {"type": "map_at_100", "value": 16.474}, {"type": "map_at_1000", "value": 18.074}, {"type": "map_at_3", "value": 6.03}, {"type": "map_at_5", "value": 8.067}, {"type": "mrr_at_1", "value": 34.694}, {"type": "mrr_at_10", "value": 51.063}, {"type": "mrr_at_100", "value": 51.908}, {"type": "mrr_at_1000", "value": 51.908}, {"type": "mrr_at_3", "value": 47.959}, {"type": "mrr_at_5", "value": 49.694}, {"type": "ndcg_at_1", "value": 32.653}, {"type": "ndcg_at_10", "value": 28.305000000000003}, {"type": "ndcg_at_100", "value": 35.311}, {"type": "ndcg_at_1000", "value": 47.644999999999996}, {"type": "ndcg_at_3", "value": 32.187}, {"type": "ndcg_at_5", "value": 29.134999999999998}, {"type": 
"precision_at_1", "value": 34.694}, {"type": "precision_at_10", "value": 26.122}, {"type": "precision_at_100", "value": 6.755}, {"type": "precision_at_1000", "value": 1.467}, {"type": "precision_at_3", "value": 34.694}, {"type": "precision_at_5", "value": 30.203999999999997}, {"type": "recall_at_1", "value": 2.603}, {"type": "recall_at_10", "value": 18.716}, {"type": "recall_at_100", "value": 42.512}, {"type": "recall_at_1000", "value": 79.32000000000001}, {"type": "recall_at_3", "value": 7.59}, {"type": "recall_at_5", "value": 10.949}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 74.117}, {"type": "ap", "value": 15.89357321699319}, {"type": "f1", "value": 57.14385866369257}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 61.38370118845502}, {"type": "f1", "value": 61.67038693866553}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 42.57754941537969}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.1775049174465}, {"type": "cos_sim_ap", "value": 74.3994879581554}, {"type": "cos_sim_f1", "value": 69.32903671308551}, {"type": "cos_sim_precision", "value": 61.48193508879363}, {"type": "cos_sim_recall", "value": 79.47229551451187}, {"type": "dot_accuracy", "value": 81.65345413363534}, {"type": "dot_ap", "value": 59.690898346685096}, {"type": "dot_f1", "value": 57.27622826467499}, {"type": "dot_precision", "value": 51.34965473948525}, {"type": "dot_recall", "value": 64.74934036939314}, {"type": "euclidean_accuracy", "value": 86.04637301066937}, {"type": "euclidean_ap", "value": 74.33009001775268}, {"type": "euclidean_f1", "value": 69.2458374142997}, {"type": "euclidean_precision", "value": 64.59570580173595}, {"type": "euclidean_recall", "value": 74.6174142480211}, {"type": "manhattan_accuracy", "value": 86.11193896405793}, {"type": "manhattan_ap", "value": 74.2964140130421}, {"type": "manhattan_f1", "value": 69.11601528788066}, {"type": "manhattan_precision", "value": 64.86924323073363}, {"type": "manhattan_recall", "value": 73.95778364116094}, {"type": "max_accuracy", "value": 86.1775049174465}, {"type": "max_ap", "value": 74.3994879581554}, {"type": "max_f1", "value": 69.32903671308551}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.01501921061823}, {"type": "cos_sim_ap", "value": 85.97819287477351}, {"type": "cos_sim_f1", "value": 78.33882858518875}, {"type": "cos_sim_precision", "value": 75.49446626204926}, {"type": 
"cos_sim_recall", "value": 81.40591315060055}, {"type": "dot_accuracy", "value": 86.47494857763806}, {"type": "dot_ap", "value": 78.77420360340282}, {"type": "dot_f1", "value": 73.06433247936238}, {"type": "dot_precision", "value": 67.92140777983595}, {"type": "dot_recall", "value": 79.04989220819218}, {"type": "euclidean_accuracy", "value": 88.7297706368611}, {"type": "euclidean_ap", "value": 85.61550568529317}, {"type": "euclidean_f1", "value": 77.84805525263539}, {"type": "euclidean_precision", "value": 73.73639994491117}, {"type": "euclidean_recall", "value": 82.44533415460425}, {"type": "manhattan_accuracy", "value": 88.75111576823068}, {"type": "manhattan_ap", "value": 85.58701671476263}, {"type": "manhattan_f1", "value": 77.70169909067856}, {"type": "manhattan_precision", "value": 73.37666780704755}, {"type": "manhattan_recall", "value": 82.5685247921158}, {"type": "max_accuracy", "value": 89.01501921061823}, {"type": "max_ap", "value": 85.97819287477351}, {"type": "max_f1", "value": 78.33882858518875}]}]}]}
dataset
null
573
HiTZ/GoLLIE-13B
HiTZ
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "code", "text-generation-inference", "Information Extraction", "IE", "Named Entity Recogniton", "Event Extraction", "Relation Extraction", "LLaMA", "custom_code", "en", "dataset:ACE05", "dataset:bc5cdr", "dataset:conll2003", "dataset:ncbi_disease", "dataset:conll2012_ontonotesv5", "dataset:rams", "dataset:tacred", "dataset:wnut_17", "arxiv:2310.03668", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-09-29T23:55:28Z
2023-10-20T07:13:36+00:00
92
7
--- datasets: - ACE05 - bc5cdr - conll2003 - ncbi_disease - conll2012_ontonotesv5 - rams - tacred - wnut_17 language: - en license: llama2 metrics: - f1 pipeline_tag: text-generation tags: - code - text-generation-inference - Information Extraction - IE - Named Entity Recogniton - Event Extraction - Relation Extraction - LLaMA --- <p align="center"> <br> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/GoLLIE.png" style="height: 250px;"> <h2 align="center"><b>G</b>uideline f<b>o</b>llowing <b>L</b>arge <b>L</b>anguage Model for <b>I</b>nformation <b>E</b>xtraction</h2> <br> # Model Card for GoLLIE 13B <p align="justify"> We present GoLLIE, a Large Language Model trained to follow annotation guidelines. GoLLIE outperforms previous approaches on zero-shot Information Extraction and allows the user to perform inferences with annotation schemas defined on the fly. Different from previous approaches, GoLLIE is able to follow detailed definitions and does not only rely on the knowledge already encoded in the LLM. - 💻 Code: [https://github.com/osainz59/CoLLIE/](https://github.com/hitz-zentroa/GoLLIE) - 📒 Blog Post: [GoLLIE: Guideline-following Large Language Model for Information Extraction](https://hitz-zentroa.github.io/GoLLIE/) - 📖 Paper: [GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction](https://arxiv.org/abs/2310.03668) - 🐕 GoLLIE Collection in the 🤗HuggingFace Hub: [HiTZ/gollie](https://huggingface.co/collections/HiTZ/gollie-651bf19ee315e8a224aacc4f) - 🚀 Example Jupyter Notebooks: [GoLLIE Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) </p> <p align="center"> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/zero_shot_results.png"> </p> ### Model Description - **Developed by:** [Oscar Sainz](https://osainz59.github.io/), [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/), [Rodrigo Agerri](https://ragerri.github.io/), [Oier Lopez de Lacalle](https://oierldl.github.io/), [German Rigau](https://adimen.si.ehu.es/~rigau/) and [Eneko Agirre](https://eagirre.github.io/) - **Institution:** [HiTZ Basque Center for Language Technology](http://www.hitz.eus/) - [Ixa](https://www.ixa.eus/node/2?language=en), [University of the Basque Country UPV/EHU](https://www.ehu.eus/en/en-home) - **Model type:** Text Generation - **Language(s) (NLP):** English - **License:** LLaMA2 License for the base and merged model. Apache 2.0 for pre-trained LoRA Adapters - **Finetuned from model:** CODE-LLaMA2 ## Schema definition and inference example The labels are represented as Python classes, and the guidelines or instructions are introduced as docstrings. The model starts generating after the `result = [` line. ```Python # Entity definitions @dataclass class Launcher(Template): """Refers to a vehicle designed primarily to transport payloads from the Earth's surface to space. Launchers can carry various payloads, including satellites, crewed spacecraft, and cargo, into various orbits or even beyond Earth's orbit. They are usually multi-stage vehicles that use rocket engines for propulsion.""" mention: str """ The name of the launcher vehicle. Such as: "Saturn V", "Atlas V", "Soyuz", "Ariane 5" """ space_company: str # The company that operates the launcher. Such as: "Blue Origin", "ESA", "Boeing", "ISRO", "Northrop Grumman", "Arianespace" crew: List[str] # Names of the crew members boarding the Launcher. 
Such as: "Neil Armstrong", "Michael Collins", "Buzz Aldrin" @dataclass class Mission(Template): """Any planned or accomplished journey beyond Earth's atmosphere with specific objectives, either crewed or uncrewed. It includes missions to satellites, the International Space Station (ISS), other celestial bodies, and deep space.""" mention: str """ The name of the mission. Such as: "Apollo 11", "Artemis", "Mercury" """ date: str # The start date of the mission departure: str # The place from which the vehicle will be launched. Such as: "Florida", "Houston", "French Guiana" destination: str # The place or planet to which the launcher will be sent. Such as "Moon", "low-orbit", "Saturn" # This is the text to analyze text = ( "The Ares 3 mission to Mars is scheduled for 2032. The Starship rocket built by SpaceX will take off from Boca Chica," "carrying the astronauts Max Rutherford, Elena Soto, and Jake Martinez." ) # The annotation instances that take place in the text above are listed here result = [ Mission(mention='Ares 3', date='2032', departure='Boca Chica', destination='Mars'), Launcher(mention='Starship', space_company='SpaceX', crew=['Max Rutherford', 'Elena Soto', 'Jake Martinez']) ] ``` ## How to Get Started with the Model Please read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to get started with GoLLIE. The best way to load the model is using our custom `load_model` function. However, you can also load it using the AutoModelForCausalLM class. **Important**: Our flash attention implementation has small numerical differences compared to the attention implementation in Huggingface. You must use the flag `trust_remote_code=True` or you will get inferior results. Flash attention requires an available CUDA GPU. Running GoLLIE pre-trained models on a CPU is not supported. We plan to address this in future releases. First, install flash attention 2: ```bash pip install flash-attn --no-build-isolation pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary ``` Then you can load the model using ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("HiTZ/GoLLIE-7B") model = AutoModelForCausalLM.from_pretrained("HiTZ/GoLLIE-7B", trust_remote_code=True, torch_dtype=torch.bfloat16) model.to("cuda") ``` Read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to learn how to easily define guidelines, generate model inputs and parse the output! ### Training Data This is the list of tasks used for training and evaluating GoLLIE. However, as demonstrated in the 🚀 [Create Custom Task notebook](https://github.com/hitz-zentroa/GoLLIE/blob/main/notebooks/Create%20Custom%20Task.ipynb), GoLLIE can perform a wide range of unseen tasks. For more info, read our [📖Paper](https://arxiv.org/abs/2310.03668). 
<p align="center"> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/datasets.png"> </p> ## Evaluation | Model | Supervised average F1 | Zero-shot average F1 | 🤗HuggingFace Hub | |---|:---------------------:|:--------------------:|:---------------------------------------------------------:| | GoLLIE-7B | 73.0 | 55.3 | [HiTZ/GoLLIE-7B](https://huggingface.co/HiTZ/GoLLIE-7B) | | GoLLIE-13B | 73.9 | 56.0 | [HiTZ/GoLLIE-13B](https://huggingface.co/HiTZ/GoLLIE-13B) | | GoLLIE-34B | **75.0** | **57.2** | [HiTZ/GoLLIE-34B](https://huggingface.co/HiTZ/GoLLIE-34B) | ## Environmental Impact | Model | Hardware | FLOPs | Time (h) | CO<sup>2</sup>eq (kg) | |----------------|-------------------|---------------------------|-------------------|-------------------------------------| | GoLLIE 7B | 1xA100 | 11.9e<sup>18</sup> | 44.5 | 1.57 | | GoLLIE 13B | 1xA100 | 22.7e<sup>18</sup> | 79.5 | 2.80 | | GoLLIE 34B | 2xA100 | 55.8e<sup>18</sup> | 94.6 | 6.67 | ## Citation ``` @misc{sainz2023gollie, title={GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction}, author={Oscar Sainz and Iker García-Ferrero and Rodrigo Agerri and Oier Lopez de Lacalle and German Rigau and Eneko Agirre}, year={2023}, eprint={2310.03668}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
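For completeness, here is a minimal inference sketch. It is not taken from the official GoLLIE notebooks: it only illustrates the generate-and-decode mechanics, and it assumes that `model` and `tokenizer` were loaded as in the snippet above and that `prompt` holds a schema-plus-text string like the one in the schema-definition example, ending with `result = [`.

```python
# Minimal inference sketch (illustrative, not the official GoLLIE notebook code).
# Assumptions: `model` and `tokenizer` come from the loading snippet above, and
# `prompt` is the schema + text string from the schema-definition example,
# ending with "result = [".
import torch

prompt = "..."  # placeholder: Python schema classes + text + "result = ["
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
completion = tokenizer.decode(
    output[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print("result = [" + completion)  # the generated annotation list
```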
[ "BC5CDR", "NCBI DISEASE" ]
Non_BioNLP
<p align="center"> <br> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/GoLLIE.png" style="height: 250px;"> <h2 align="center"><b>G</b>uideline f<b>o</b>llowing <b>L</b>arge <b>L</b>anguage Model for <b>I</b>nformation <b>E</b>xtraction</h2> <br> # Model Card for GoLLIE 13B <p align="justify"> We present GoLLIE, a Large Language Model trained to follow annotation guidelines. GoLLIE outperforms previous approaches on zero-shot Information Extraction and allows the user to perform inferences with annotation schemas defined on the fly. Different from previous approaches, GoLLIE is able to follow detailed definitions and does not only rely on the knowledge already encoded in the LLM. - 💻 Code: [https://github.com/osainz59/CoLLIE/](https://github.com/hitz-zentroa/GoLLIE) - 📒 Blog Post: [GoLLIE: Guideline-following Large Language Model for Information Extraction](https://hitz-zentroa.github.io/GoLLIE/) - 📖 Paper: [GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction](https://arxiv.org/abs/2310.03668) - 🐕 GoLLIE Collection in the 🤗HuggingFace Hub: [HiTZ/gollie](https://huggingface.co/collections/HiTZ/gollie-651bf19ee315e8a224aacc4f) - 🚀 Example Jupyter Notebooks: [GoLLIE Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) </p> <p align="center"> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/zero_shot_results.png"> </p> ### Model Description - **Developed by:** [Oscar Sainz](https://osainz59.github.io/), [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/), [Rodrigo Agerri](https://ragerri.github.io/), [Oier Lopez de Lacalle](https://oierldl.github.io/), [German Rigau](https://adimen.si.ehu.es/~rigau/) and [Eneko Agirre](https://eagirre.github.io/) - **Institution:** [HiTZ Basque Center for Language Technology](http://www.hitz.eus/) - [Ixa](https://www.ixa.eus/node/2?language=en), [University of the Basque Country UPV/EHU](https://www.ehu.eus/en/en-home) - **Model type:** Text Generation - **Language(s) (NLP):** English - **License:** LLaMA2 License for the base and merged model. Apache 2.0 for pre-trained LoRA Adapters - **Finetuned from model:** CODE-LLaMA2 ## Schema definition and inference example The labels are represented as Python classes, and the guidelines or instructions are introduced as docstrings. The model starts generating after the `result = [` line. ```Python # Entity definitions @dataclass class Launcher(Template): """Refers to a vehicle designed primarily to transport payloads from the Earth's surface to space. Launchers can carry various payloads, including satellites, crewed spacecraft, and cargo, into various orbits or even beyond Earth's orbit. They are usually multi-stage vehicles that use rocket engines for propulsion.""" mention: str """ The name of the launcher vehicle. Such as: "Saturn V", "Atlas V", "Soyuz", "Ariane 5" """ space_company: str # The company that operates the launcher. Such as: "Blue Origin", "ESA", "Boeing", "ISRO", "Northrop Grumman", "Arianespace" crew: List[str] # Names of the crew members boarding the Launcher. Such as: "Neil Armstrong", "Michael Collins", "Buzz Aldrin" @dataclass class Mission(Template): """Any planned or accomplished journey beyond Earth's atmosphere with specific objectives, either crewed or uncrewed. It includes missions to satellites, the International Space Station (ISS), other celestial bodies, and deep space.""" mention: str """ The name of the mission. 
Such as: "Apollo 11", "Artemis", "Mercury" """ date: str # The start date of the mission departure: str # The place from which the vehicle will be launched. Such as: "Florida", "Houston", "French Guiana" destination: str # The place or planet to which the launcher will be sent. Such as "Moon", "low-orbit", "Saturn" # This is the text to analyze text = ( "The Ares 3 mission to Mars is scheduled for 2032. The Starship rocket built by SpaceX will take off from Boca Chica," "carrying the astronauts Max Rutherford, Elena Soto, and Jake Martinez." ) # The annotation instances that take place in the text above are listed here result = [ Mission(mention='Ares 3', date='2032', departure='Boca Chica', destination='Mars'), Launcher(mention='Starship', space_company='SpaceX', crew=['Max Rutherford', 'Elena Soto', 'Jake Martinez']) ] ``` ## How to Get Started with the Model Please read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to get started with GoLLIE. The best way to load the model is using our custom `load_model` function. However, you can also load it using the AutoModelForCausalLM class. **Important**: Our flash attention implementation has small numerical differences compared to the attention implementation in Huggingface. You must use the flag `trust_remote_code=True` or you will get inferior results. Flash attention requires an available CUDA GPU. Running GoLLIE pre-trained models on a CPU is not supported. We plan to address this in future releases. First, install flash attention 2: ```bash pip install flash-attn --no-build-isolation pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary ``` Then you can load the model using ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("HiTZ/GoLLIE-7B") model = AutoModelForCausalLM.from_pretrained("HiTZ/GoLLIE-7B", trust_remote_code=True, torch_dtype=torch.bfloat16) model.to("cuda") ``` Read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to learn how to easily define guidelines, generate model inputs and parse the output! ### Training Data This is the list of tasks used for training and evaluating GoLLIE. However, as demonstrated in the 🚀 [Create Custom Task notebook](https://github.com/hitz-zentroa/GoLLIE/blob/main/notebooks/Create%20Custom%20Task.ipynb), GoLLIE can perform a wide range of unseen tasks. For more info, read our [📖Paper](https://arxiv.org/abs/2310.03668). 
<p align="center"> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/datasets.png"> </p> ## Evaluation | Model | Supervised average F1 | Zero-shot average F1 | 🤗HuggingFace Hub | |---|:---------------------:|:--------------------:|:---------------------------------------------------------:| | GoLLIE-7B | 73.0 | 55.3 | [HiTZ/GoLLIE-7B](https://huggingface.co/HiTZ/GoLLIE-7B) | | GoLLIE-13B | 73.9 | 56.0 | [HiTZ/GoLLIE-13B](https://huggingface.co/HiTZ/GoLLIE-13B) | | GoLLIE-34B | **75.0** | **57.2** | [HiTZ/GoLLIE-34B](https://huggingface.co/HiTZ/GoLLIE-34B) | ## Environmental Impact | Model | Hardware | FLOPs | Time (h) | CO<sup>2</sup>eq (kg) | |----------------|-------------------|---------------------------|-------------------|-------------------------------------| | GoLLIE 7B | 1xA100 | 11.9e<sup>18</sup> | 44.5 | 1.57 | | GoLLIE 13B | 1xA100 | 22.7e<sup>18</sup> | 79.5 | 2.80 | | GoLLIE 34B | 2xA100 | 55.8e<sup>18</sup> | 94.6 | 6.67 | ## Citation ``` @misc{sainz2023gollie, title={GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction}, author={Oscar Sainz and Iker García-Ferrero and Rodrigo Agerri and Oier Lopez de Lacalle and German Rigau and Eneko Agirre}, year={2023}, eprint={2310.03668}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"datasets": ["ACE05", "bc5cdr", "conll2003", "ncbi_disease", "conll2012_ontonotesv5", "rams", "tacred", "wnut_17"], "language": ["en"], "license": "llama2", "metrics": ["f1"], "pipeline_tag": "text-generation", "tags": ["code", "text-generation-inference", "Information Extraction", "IE", "Named Entity Recogniton", "Event Extraction", "Relation Extraction", "LLaMA"]}
dataset
null
574
tsavage68/MedQA_L3_400steps_1e6rate_03beta_CSFTDPO
tsavage68
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-05-24T02:24:08Z
2024-05-24T02:28:14+00:00
4
0
--- base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT license: llama3 tags: - trl - dpo - generated_from_trainer model-index: - name: MedQA_L3_400steps_1e6rate_03beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_400steps_1e6rate_03beta_CSFTDPO This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4906 - Rewards/chosen: 3.1291 - Rewards/rejected: 0.9306 - Rewards/accuracies: 0.7846 - Rewards/margins: 2.1985 - Logps/rejected: -30.7529 - Logps/chosen: -20.8982 - Logits/rejected: -0.8390 - Logits/chosen: -0.8370 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.685 | 0.0489 | 50 | 0.6334 | -0.7936 | -0.9359 | 0.7363 | 0.1423 | -36.9746 | -33.9739 | -0.7278 | -0.7271 | | 0.4052 | 0.0977 | 100 | 0.6106 | 3.7995 | 2.4858 | 0.6945 | 1.3137 | -25.5688 | -18.6634 | -0.7922 | -0.7909 | | 0.5527 | 0.1466 | 150 | 0.5749 | 3.2572 | 1.9474 | 0.7319 | 1.3099 | -27.3637 | -20.4711 | -0.8427 | -0.8414 | | 0.3441 | 0.1954 | 200 | 0.5174 | 2.5190 | 0.7455 | 0.7582 | 1.7735 | -31.3700 | -22.9318 | -0.8395 | -0.8376 | | 0.3888 | 0.2443 | 250 | 0.4758 | 3.2338 | 1.3417 | 0.7956 | 1.8921 | -29.3826 | -20.5492 | -0.8342 | -0.8323 | | 0.2873 | 0.2931 | 300 | 0.4927 | 3.0141 | 0.8326 | 0.7912 | 2.1815 | -31.0794 | -21.2815 | -0.8318 | -0.8298 | | 0.4877 | 0.3420 | 350 | 0.4903 | 3.1277 | 0.9322 | 0.7824 | 2.1956 | -30.7476 | -20.9027 | -0.8388 | -0.8368 | | 0.4649 | 0.3908 | 400 | 0.4906 | 3.1291 | 0.9306 | 0.7846 | 2.1985 | -30.7529 | -20.8982 | -0.8390 | -0.8370 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.0.0+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
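The hyperparameters listed above map naturally onto trl's `DPOTrainer`. The sketch below is illustrative only: the card does not name the preference dataset, so a placeholder dataset is built in-line, and the argument names follow trl releases around v0.8 (they differ in newer versions).

```python
# Illustrative DPO fine-tuning sketch matching the hyperparameters above.
# The preference dataset is a placeholder; beta=0.3 reflects the "03beta"
# in the model name. Argument names assume trl ~0.8 and may differ elsewhere.
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "tsavage68/MedQA_L3_1000steps_1e6rate_SFT"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
ref_model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Placeholder preference data with the prompt/chosen/rejected columns DPO expects.
train_dataset = Dataset.from_dict({
    "prompt": ["Question: ...\nAnswer:"],
    "chosen": [" preferred answer"],
    "rejected": [" dispreferred answer"],
})

args = TrainingArguments(
    output_dir="medqa_dpo",
    per_device_train_batch_size=2,   # train_batch_size: 2
    gradient_accumulation_steps=2,   # total_train_batch_size: 4
    per_device_eval_batch_size=1,
    learning_rate=1e-6,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=400,
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=args,
    beta=0.3,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```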
[ "MEDQA" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_400steps_1e6rate_03beta_CSFTDPO This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4906 - Rewards/chosen: 3.1291 - Rewards/rejected: 0.9306 - Rewards/accuracies: 0.7846 - Rewards/margins: 2.1985 - Logps/rejected: -30.7529 - Logps/chosen: -20.8982 - Logits/rejected: -0.8390 - Logits/chosen: -0.8370 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.685 | 0.0489 | 50 | 0.6334 | -0.7936 | -0.9359 | 0.7363 | 0.1423 | -36.9746 | -33.9739 | -0.7278 | -0.7271 | | 0.4052 | 0.0977 | 100 | 0.6106 | 3.7995 | 2.4858 | 0.6945 | 1.3137 | -25.5688 | -18.6634 | -0.7922 | -0.7909 | | 0.5527 | 0.1466 | 150 | 0.5749 | 3.2572 | 1.9474 | 0.7319 | 1.3099 | -27.3637 | -20.4711 | -0.8427 | -0.8414 | | 0.3441 | 0.1954 | 200 | 0.5174 | 2.5190 | 0.7455 | 0.7582 | 1.7735 | -31.3700 | -22.9318 | -0.8395 | -0.8376 | | 0.3888 | 0.2443 | 250 | 0.4758 | 3.2338 | 1.3417 | 0.7956 | 1.8921 | -29.3826 | -20.5492 | -0.8342 | -0.8323 | | 0.2873 | 0.2931 | 300 | 0.4927 | 3.0141 | 0.8326 | 0.7912 | 2.1815 | -31.0794 | -21.2815 | -0.8318 | -0.8298 | | 0.4877 | 0.3420 | 350 | 0.4903 | 3.1277 | 0.9322 | 0.7824 | 2.1956 | -30.7476 | -20.9027 | -0.8388 | -0.8368 | | 0.4649 | 0.3908 | 400 | 0.4906 | 3.1291 | 0.9306 | 0.7846 | 2.1985 | -30.7529 | -20.8982 | -0.8390 | -0.8370 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.0.0+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
{"base_model": "tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license": "llama3", "tags": ["trl", "dpo", "generated_from_trainer"], "model-index": [{"name": "MedQA_L3_400steps_1e6rate_03beta_CSFTDPO", "results": []}]}
dataset
null
575
yongzx/pythia-1b-sft-hh
yongzx
text-generation
[ "transformers", "pytorch", "gpt_neox", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-08-23T11:09:39Z
2023-08-28T18:52:19+00:00
43
1
--- {} --- wandb: https://wandb.ai/eleutherai/pythia-rlhf/runs/6y83ekqy?workspace=user-yongzx Model Evals | Task |Version|Filter| Metric |Value | |Stderr| |--------------|-------|------|----------|-----:|---|-----:| |arc_challenge |Yaml |none |acc |0.2526|± |0.0127| | | |none |acc_norm |0.2773|± |0.0131| |arc_easy |Yaml |none |acc |0.5791|± |0.0101| | | |none |acc_norm |0.4912|± |0.0103| |lambada_openai|Yaml |none |perplexity|7.0516|± |0.1979| | | |none |acc |0.5684|± |0.0069| |logiqa |Yaml |none |acc |0.2166|± |0.0162| | | |none |acc_norm |0.2919|± |0.0178| |piqa |Yaml |none |acc |0.7176|± |0.0105| | | |none |acc_norm |0.6964|± |0.0107| |sciq |Yaml |none |acc |0.8460|± |0.0114| | | |none |acc_norm |0.7700|± |0.0133| |winogrande |Yaml |none |acc |0.5399|± |0.0140| |wsc |Yaml |none |acc |0.3654|± |0.0474|
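A results table in this format is typically produced with EleutherAI's lm-evaluation-harness. The sketch below shows one way to reproduce it through the harness's Python API; the task list and arguments are illustrative rather than the exact command behind the table.

```python
# Illustrative sketch: re-running the evaluations above with
# lm-evaluation-harness (lm-eval >= 0.4). The exact harness version and
# settings used for the card's numbers are not stated in the card.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=yongzx/pythia-1b-sft-hh",
    tasks=["arc_challenge", "arc_easy", "lambada_openai", "logiqa",
           "piqa", "sciq", "winogrande", "wsc"],
    num_fewshot=0,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```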
[ "SCIQ" ]
Non_BioNLP
wandb: https://wandb.ai/eleutherai/pythia-rlhf/runs/6y83ekqy?workspace=user-yongzx Model Evals | Task |Version|Filter| Metric |Value | |Stderr| |--------------|-------|------|----------|-----:|---|-----:| |arc_challenge |Yaml |none |acc |0.2526|± |0.0127| | | |none |acc_norm |0.2773|± |0.0131| |arc_easy |Yaml |none |acc |0.5791|± |0.0101| | | |none |acc_norm |0.4912|± |0.0103| |lambada_openai|Yaml |none |perplexity|7.0516|± |0.1979| | | |none |acc |0.5684|± |0.0069| |logiqa |Yaml |none |acc |0.2166|± |0.0162| | | |none |acc_norm |0.2919|± |0.0178| |piqa |Yaml |none |acc |0.7176|± |0.0105| | | |none |acc_norm |0.6964|± |0.0107| |sciq |Yaml |none |acc |0.8460|± |0.0114| | | |none |acc_norm |0.7700|± |0.0133| |winogrande |Yaml |none |acc |0.5399|± |0.0140| |wsc |Yaml |none |acc |0.3654|± |0.0474|
{}
dataset
null
576
RichardErkhov/TommyZQ_-_GPT-4o-gguf
RichardErkhov
null
[ "gguf", "arxiv:2404.14619", "endpoints_compatible", "region:us" ]
2024-10-12T06:35:28Z
2024-10-12T06:53:57+00:00
216
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GPT-4o - GGUF - Model creator: https://huggingface.co/TommyZQ/ - Original model: https://huggingface.co/TommyZQ/GPT-4o/ | Name | Quant method | Size | | ---- | ---- | ---- | | [GPT-4o.Q2_K.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q2_K.gguf) | Q2_K | 0.39GB | | [GPT-4o.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [GPT-4o.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.IQ3_S.gguf) | IQ3_S | 0.46GB | | [GPT-4o.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q3_K_S.gguf) | Q3_K_S | 0.46GB | | [GPT-4o.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.IQ3_M.gguf) | IQ3_M | 0.49GB | | [GPT-4o.Q3_K.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q3_K.gguf) | Q3_K | 0.52GB | | [GPT-4o.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q3_K_M.gguf) | Q3_K_M | 0.52GB | | [GPT-4o.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q3_K_L.gguf) | Q3_K_L | 0.56GB | | [GPT-4o.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.IQ4_XS.gguf) | IQ4_XS | 0.55GB | | [GPT-4o.Q4_0.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q4_0.gguf) | Q4_0 | 0.58GB | | [GPT-4o.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.IQ4_NL.gguf) | IQ4_NL | 0.58GB | | [GPT-4o.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q4_K_S.gguf) | Q4_K_S | 0.58GB | | [GPT-4o.Q4_K.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q4_K.gguf) | Q4_K | 0.63GB | | [GPT-4o.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q4_K_M.gguf) | Q4_K_M | 0.63GB | | [GPT-4o.Q4_1.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q4_1.gguf) | Q4_1 | 0.64GB | | [GPT-4o.Q5_0.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q5_0.gguf) | Q5_0 | 0.7GB | | [GPT-4o.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q5_K_S.gguf) | Q5_K_S | 0.7GB | | [GPT-4o.Q5_K.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q5_K.gguf) | Q5_K | 0.73GB | | [GPT-4o.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [GPT-4o.Q5_1.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q5_1.gguf) | Q5_1 | 0.76GB | | [GPT-4o.Q6_K.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q6_K.gguf) | Q6_K | 0.83GB | | [GPT-4o.Q8_0.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q8_0.gguf) | Q8_0 | 1.07GB | Original model description: --- license: other license_name: apple-sample-code-license license_link: LICENSE --- # OpenELM *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari* We introduce **OpenELM**, a family of 
**Open**-source **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them. ## Usage We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`. You can try the model by running the following command: ``` python generate_openelm.py --model apple/OpenELM-1_1B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 ``` Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your hugging face access token. Additional arguments to the hugging face generate function can be passed via `generate_kwargs`. As an example, to speedup the inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows: ``` python generate_openelm.py --model apple/OpenELM-1_1B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10 ``` Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example: ``` python generate_openelm.py --model apple/OpenELM-1_1B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL] ``` ## Main Results ### Zero-Shot | **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | 
**39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** | ### LLM360 | **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** | ### OpenLLM Leaderboard | **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** | See the technical report for more results and comparison. ## Evaluation ### Setup Install the following dependencies: ```bash # install public lm-eval-harness harness_repo="public-lm-eval-harness" git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo} cd ${harness_repo} # use main branch on 03-15-2024, SHA is dc90fec git checkout dc90fec pip install -e . cd .. 
# 66d6242 is the main branch on 2024-04-01 pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242 pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0 ``` ### Evaluate OpenELM ```bash # OpenELM-1_1B hf_model=OpenELM-1_1B # this flag is needed because lm-eval-harness set add_bos_token to False by default, but OpenELM uses LLaMA tokenizer which requires add_bos_token to be True tokenizer=meta-llama/Llama-2-7b-hf add_bos_token=True batch_size=1 mkdir lm_eval_output shot=0 task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2 lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=5 task=mmlu,winogrande lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=25 task=arc_challenge,crows_pairs_english lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=10 task=hellaswag lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log ``` ## Bias, Risks, and Limitations The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements. 
## Citation If you find our work useful, please cite: ```BibTex @article{mehtaOpenELMEfficientLanguage2024, title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open}-source {Training} and {Inference} {Framework}}, shorttitle = {{OpenELM}}, url = {https://arxiv.org/abs/2404.14619v1}, language = {en}, urldate = {2024-04-24}, journal = {arXiv.org}, author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad}, month = apr, year = {2024}, } @inproceedings{mehta2022cvnets, author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad}, title = {CVNets: High Performance Library for Computer Vision}, year = {2022}, booktitle = {Proceedings of the 30th ACM International Conference on Multimedia}, series = {MM '22} } ```
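As a usage sketch for the quantized files listed in the table above (not part of the original card), one of the GGUF variants can be downloaded and run with llama-cpp-python:

```python
# Illustrative only: running one of the GGUF quantizations from the table
# with llama-cpp-python. The quant level, context size, and prompt are
# arbitrary choices; pick whichever file fits your hardware.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/TommyZQ_-_GPT-4o-gguf",
    filename="GPT-4o.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Once upon a time there was", max_tokens=64)
print(out["choices"][0]["text"])
```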
[ "SCIQ" ]
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GPT-4o - GGUF - Model creator: https://huggingface.co/TommyZQ/ - Original model: https://huggingface.co/TommyZQ/GPT-4o/ | Name | Quant method | Size | | ---- | ---- | ---- | | [GPT-4o.Q2_K.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q2_K.gguf) | Q2_K | 0.39GB | | [GPT-4o.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [GPT-4o.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.IQ3_S.gguf) | IQ3_S | 0.46GB | | [GPT-4o.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q3_K_S.gguf) | Q3_K_S | 0.46GB | | [GPT-4o.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.IQ3_M.gguf) | IQ3_M | 0.49GB | | [GPT-4o.Q3_K.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q3_K.gguf) | Q3_K | 0.52GB | | [GPT-4o.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q3_K_M.gguf) | Q3_K_M | 0.52GB | | [GPT-4o.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q3_K_L.gguf) | Q3_K_L | 0.56GB | | [GPT-4o.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.IQ4_XS.gguf) | IQ4_XS | 0.55GB | | [GPT-4o.Q4_0.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q4_0.gguf) | Q4_0 | 0.58GB | | [GPT-4o.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.IQ4_NL.gguf) | IQ4_NL | 0.58GB | | [GPT-4o.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q4_K_S.gguf) | Q4_K_S | 0.58GB | | [GPT-4o.Q4_K.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q4_K.gguf) | Q4_K | 0.63GB | | [GPT-4o.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q4_K_M.gguf) | Q4_K_M | 0.63GB | | [GPT-4o.Q4_1.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q4_1.gguf) | Q4_1 | 0.64GB | | [GPT-4o.Q5_0.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q5_0.gguf) | Q5_0 | 0.7GB | | [GPT-4o.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q5_K_S.gguf) | Q5_K_S | 0.7GB | | [GPT-4o.Q5_K.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q5_K.gguf) | Q5_K | 0.73GB | | [GPT-4o.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [GPT-4o.Q5_1.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q5_1.gguf) | Q5_1 | 0.76GB | | [GPT-4o.Q6_K.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q6_K.gguf) | Q6_K | 0.83GB | | [GPT-4o.Q8_0.gguf](https://huggingface.co/RichardErkhov/TommyZQ_-_GPT-4o-gguf/blob/main/GPT-4o.Q8_0.gguf) | Q8_0 | 1.07GB | Original model description: --- license: other license_name: apple-sample-code-license license_link: LICENSE --- # OpenELM *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari* We introduce **OpenELM**, a family of 
**Open**-source **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them. ## Usage We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`. You can try the model by running the following command: ``` python generate_openelm.py --model apple/OpenELM-1_1B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 ``` Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your hugging face access token. Additional arguments to the hugging face generate function can be passed via `generate_kwargs`. As an example, to speedup the inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows: ``` python generate_openelm.py --model apple/OpenELM-1_1B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10 ``` Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example: ``` python generate_openelm.py --model apple/OpenELM-1_1B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL] ``` ## Main Results ### Zero-Shot | **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | 
**39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** | ### LLM360 | **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** | ### OpenLLM Leaderboard | **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** | See the technical report for more results and comparison. ## Evaluation ### Setup Install the following dependencies: ```bash # install public lm-eval-harness harness_repo="public-lm-eval-harness" git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo} cd ${harness_repo} # use main branch on 03-15-2024, SHA is dc90fec git checkout dc90fec pip install -e . cd .. 
# 66d6242 is the main branch on 2024-04-01 pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242 pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0 ``` ### Evaluate OpenELM ```bash # OpenELM-1_1B hf_model=OpenELM-1_1B # this flag is needed because lm-eval-harness set add_bos_token to False by default, but OpenELM uses LLaMA tokenizer which requires add_bos_token to be True tokenizer=meta-llama/Llama-2-7b-hf add_bos_token=True batch_size=1 mkdir lm_eval_output shot=0 task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2 lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=5 task=mmlu,winogrande lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=25 task=arc_challenge,crows_pairs_english lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=10 task=hellaswag lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log ``` ## Bias, Risks, and Limitations The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements. 
## Citation If you find our work useful, please cite: ```BibTex @article{mehtaOpenELMEfficientLanguage2024, title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open}-source {Training} and {Inference} {Framework}}, shorttitle = {{OpenELM}}, url = {https://arxiv.org/abs/2404.14619v1}, language = {en}, urldate = {2024-04-24}, journal = {arXiv.org}, author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad}, month = apr, year = {2024}, } @inproceedings{mehta2022cvnets, author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad}, title = {CVNets: High Performance Library for Computer Vision}, year = {2022}, booktitle = {Proceedings of the 30th ACM International Conference on Multimedia}, series = {MM '22} } ```
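## Using the GGUF files

The quantized files listed in the table at the top of this card are intended for llama.cpp-compatible runtimes. A minimal sketch with `llama-cpp-python` (assuming the installed version supports this model architecture; the file choice and parameters below are illustrative):

```python
# Sketch: download one of the GGUF quantizations from this repository and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/TommyZQ_-_GPT-4o-gguf",
    filename="GPT-4o.Q4_K_M.gguf",  # any entry from the table above
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Once upon a time there was", max_tokens=64)
print(out["choices"][0]["text"])
```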
{}
dataset
null
577
Weyaxi/Einstein-v5-v0.2-7B
Weyaxi
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "conversational", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "base_model:mistral-community/Mistral-7B-v0.2", "base_model:finetune:mistral-community/Mistral-7B-v0.2", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-03-16T23:16:17Z
2024-04-04T17:25:36+00:00
25
10
--- base_model: alpindale/Mistral-7B-v0.2-hf datasets: - allenai/ai2_arc - camel-ai/physics - camel-ai/chemistry - camel-ai/biology - camel-ai/math - metaeval/reclor - openbookqa - mandyyyyii/scibench - derek-thomas/ScienceQA - TIGER-Lab/ScienceEval - jondurbin/airoboros-3.2 - LDJnr/Capybara - Cot-Alpaca-GPT4-From-OpenHermes-2.5 - STEM-AI-mtl/Electrical-engineering - knowrohit07/saraswati-stem - sablo/oasst2_curated - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - bigbio/med_qa - meta-math/MetaMathQA-40K - openbookqa - piqa - metaeval/reclor - derek-thomas/ScienceQA - scibench - sciq - Open-Orca/SlimOrca - migtissera/Synthia-v1.3 - TIGER-Lab/ScienceEval - allenai/WildChat - microsoft/orca-math-word-problems-200k - openchat/openchat_sharegpt4_dataset - teknium/GPTeacher-General-Instruct - m-a-p/CodeFeedback-Filtered-Instruction license: other tags: - axolotl - generated_from_trainer - Mistral - instruct - finetune - chatml - gpt4 - synthetic data - science - physics - chemistry - biology - math model-index: - name: Einstein-v5-v0.2-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.92 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 80.99 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 61.02 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 52.59 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 59.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B name: Open LLM Leaderboard --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/4dxDYjxqLOaALlh4xUXjF.png) <center><h1>❗⚠️ WARNING ⚠️❗</h1></center> ❗ This model has been deprecated due to a sliding window error in the base model's configuration. 
This issue has been resolved with the [following commit in the base model](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf/commit/a9c96452b03842672775338054f49f7b82c68854), and upcoming versions of the Einstein series will utilize the correct configuration in the base model. ------------- # 🔬 Einstein-v5-v0.2-7B This model is a full fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) on diverse datasets. This model is finetuned using `8xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl). This model's training was sponsored by [sablo.ai](https://sablo.ai). <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: alpindale/Mistral-7B-v0.2-hf model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer is_mistral_derived_model: true load_in_8bit: false load_in_4bit: false strict: false chat_template: chatml datasets: - path: data/merged_all.json ds_type: json type: alpaca conversation: chatml - path: data/gpteacher-instruct-special-alpaca.json ds_type: json type: gpteacher conversation: chatml - path: data/capybara_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/synthia-v1.3_sharegpt_12500.json ds_type: json type: sharegpt conversation: chatml - path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/slimorca_dedup_filtered_95k_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/allenai_wild_chat_gpt4_english_toxic_random_half_4k_sharegpt.json ds_type: json type: sharegpt strict: false conversation: chatml - path: data/pippa_bagel_repo_3k_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/gpt4_data_lmys_1m_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/sharegpt_gpt4_english.json ds_type: json type: sharegpt conversation: chatml dataset_prepared_path: last_run_prepared # val_set_size: 0.005 val_set_size: 0.0 do_bench_eval: true output_dir: ./Einstein-v5-Mistral-v0.2-beta-model sequence_len: 8192 sample_packing: true pad_to_sequence_len: true eval_sample_packing: false wandb_project: Einstein wandb_entity: wandb_watch: wandb_name: wandb_log_model: hub_model_id: Weyaxi/Einstein-v5-Mistral-v0.2-beta save_safetensors: true gradient_accumulation_steps: 4 micro_batch_size: 1 num_epochs: 2 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.000005 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 3 # changed eval_table_size: eval_table_max_new_tokens: 128 saves_per_epoch: 3 # changed debug: deepspeed: zero3_bf16.json weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "<|im_end|>" unk_token: "<unk>" tokens: - "<|im_start|>" ``` </details><br> # 💬 Prompt Template You can use this prompt template while using the model: ### ChatML ``` <|im_start|>system {system}<|im_end|> <|im_start|>user {user}<|im_end|> <|im_start|>assistant {asistant}<|im_end|> ``` This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the 
`tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]

gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```

# 🔄 Quantized versions

Quantized versions of this model are available.

## GGUF [@bartowski](https://huggingface.co/bartowski)

- https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF

## ExLlamaV2 [@bartowski](https://huggingface.co/bartowski)

- https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2

# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v5-v0.2-7B)

| Metric |Value|
|---------------------------------|----:|
|Avg. |65.65|
|AI2 Reasoning Challenge (25-Shot)|60.92|
|HellaSwag (10-Shot) |80.99|
|MMLU (5-Shot) |61.02|
|TruthfulQA (0-shot) |52.59|
|Winogrande (5-shot) |78.69|
|GSM8k (5-shot) |59.67|

# 🤖 Additional information about training

This model was fully fine-tuned for 1 epoch. Total number of steps was 1124.

<details><summary>Loss graph</summary>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/TkzKdxZZHznGjYLWiSmLS.png)
</details><br>

# 🤝 Acknowledgments

Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.

Thanks to all the dataset authors mentioned in the datasets section.

Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for making the framework I used to train this model.

Thanks to the entire open-source AI community.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

If you would like to support me:

[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
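## End-to-end usage sketch

Putting the pieces above together, a minimal load-and-generate example could look like the following (a sketch: dtype, device placement, and generation settings are illustrative, and `accelerate` is assumed for `device_map="auto"`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Einstein-v5-v0.2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
]

# The tokenizer's chat template applies the ChatML format shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```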
[ "SCIQ" ]
Non_BioNLP
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/4dxDYjxqLOaALlh4xUXjF.png) <center><h1>❗⚠️ WARNING ⚠️❗</h1></center> ❗ This model has been deprecated due to a sliding window error in the base model's configuration. This issue has been resolved with the [following commit in the base model](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf/commit/a9c96452b03842672775338054f49f7b82c68854), and upcoming versions of the Einstein series will utilize the correct configuration in the base model. ------------- # 🔬 Einstein-v5-v0.2-7B This model is a full fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) on diverse datasets. This model is finetuned using `8xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl). This model's training was sponsored by [sablo.ai](https://sablo.ai). <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: alpindale/Mistral-7B-v0.2-hf model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer is_mistral_derived_model: true load_in_8bit: false load_in_4bit: false strict: false chat_template: chatml datasets: - path: data/merged_all.json ds_type: json type: alpaca conversation: chatml - path: data/gpteacher-instruct-special-alpaca.json ds_type: json type: gpteacher conversation: chatml - path: data/capybara_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/synthia-v1.3_sharegpt_12500.json ds_type: json type: sharegpt conversation: chatml - path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/slimorca_dedup_filtered_95k_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/allenai_wild_chat_gpt4_english_toxic_random_half_4k_sharegpt.json ds_type: json type: sharegpt strict: false conversation: chatml - path: data/pippa_bagel_repo_3k_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/gpt4_data_lmys_1m_sharegpt.json ds_type: json type: sharegpt conversation: chatml - path: data/sharegpt_gpt4_english.json ds_type: json type: sharegpt conversation: chatml dataset_prepared_path: last_run_prepared # val_set_size: 0.005 val_set_size: 0.0 do_bench_eval: true output_dir: ./Einstein-v5-Mistral-v0.2-beta-model sequence_len: 8192 sample_packing: true pad_to_sequence_len: true eval_sample_packing: false wandb_project: Einstein wandb_entity: wandb_watch: wandb_name: wandb_log_model: hub_model_id: Weyaxi/Einstein-v5-Mistral-v0.2-beta save_safetensors: true gradient_accumulation_steps: 4 micro_batch_size: 1 num_epochs: 2 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.000005 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 3 # changed eval_table_size: eval_table_max_new_tokens: 128 saves_per_epoch: 3 # changed debug: deepspeed: zero3_bf16.json weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "<|im_end|>" unk_token: "<unk>" tokens: - "<|im_start|>" ``` </details><br> # 💬 Prompt Template You can use this prompt template while using the model: ### ChatML ``` <|im_start|>system 
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```

This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]

gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```

# 🔄 Quantized versions

Quantized versions of this model are available.

## GGUF [@bartowski](https://huggingface.co/bartowski)

- https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF

## ExLlamaV2 [@bartowski](https://huggingface.co/bartowski)

- https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2

# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v5-v0.2-7B)

| Metric |Value|
|---------------------------------|----:|
|Avg. |65.65|
|AI2 Reasoning Challenge (25-Shot)|60.92|
|HellaSwag (10-Shot) |80.99|
|MMLU (5-Shot) |61.02|
|TruthfulQA (0-shot) |52.59|
|Winogrande (5-shot) |78.69|
|GSM8k (5-shot) |59.67|

# 🤖 Additional information about training

This model was fully fine-tuned for 1 epoch. Total number of steps was 1124.

<details><summary>Loss graph</summary>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/TkzKdxZZHznGjYLWiSmLS.png)
</details><br>

# 🤝 Acknowledgments

Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.

Thanks to all the dataset authors mentioned in the datasets section.

Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for making the framework I used to train this model.

Thanks to the entire open-source AI community.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

If you would like to support me:

[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
{"base_model": "alpindale/Mistral-7B-v0.2-hf", "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval", "allenai/WildChat", "microsoft/orca-math-word-problems-200k", "openchat/openchat_sharegpt4_dataset", "teknium/GPTeacher-General-Instruct", "m-a-p/CodeFeedback-Filtered-Instruction"], "license": "other", "tags": ["axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math"], "model-index": [{"name": "Einstein-v5-v0.2-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 60.92, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 80.99, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 61.02, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 52.59}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 78.69, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 59.67, "name": "accuracy"}], "source": {"url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B", "name": "Open LLM Leaderboard"}}]}]}
dataset
null
578
ntc-ai/SDXL-LoRA-slider.striking-a-confident-pose
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2024-01-26T19:28:37Z
2024-01-26T19:28:41+00:00
23
1
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/striking a confident pose.../striking a confident pose_17_3.0.png widget: - text: striking a confident pose output: url: images/striking a confident pose_17_3.0.png - text: striking a confident pose output: url: images/striking a confident pose_19_3.0.png - text: striking a confident pose output: url: images/striking a confident pose_20_3.0.png - text: striking a confident pose output: url: images/striking a confident pose_21_3.0.png - text: striking a confident pose output: url: images/striking a confident pose_22_3.0.png inference: false instance_prompt: striking a confident pose --- # ntcai.xyz slider - striking a confident pose (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/striking a confident pose_17_-3.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_17_0.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_17_3.0.png" width=256 height=256 /> | | <img src="images/striking a confident pose_19_-3.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_19_0.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_19_3.0.png" width=256 height=256 /> | | <img src="images/striking a confident pose_20_-3.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_20_0.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` striking a confident pose ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.striking-a-confident-pose', weight_name='striking a confident pose.safetensors', adapter_name="striking a confident pose") # Activate the LoRA pipe.set_adapters(["striking a confident pose"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, striking a confident pose" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. 
## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
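## Negative strengths

As the strength table above suggests, the slider also works in the negative direction to push generations away from the concept. A small sketch reusing the `pipe` object from the diffusers example above (the exact weight value is illustrative, and using a negative adapter weight is an assumption based on the -3/0/3 preview images rather than something verified here):

```python
# Reuses `pipe` from the diffusers example above.
pipe.set_adapters(["striking a confident pose"], adapter_weights=[-2.0])

image = pipe(
    "medieval rich kingpin sitting in a tavern",
    negative_prompt="nsfw",
    width=512,
    height=512,
    guidance_scale=2,
    num_inference_steps=10,
).images[0]
image.save("result_negative.png")
```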
[ "CRAFT" ]
Non_BioNLP
# ntcai.xyz slider - striking a confident pose (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/striking a confident pose_17_-3.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_17_0.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_17_3.0.png" width=256 height=256 /> | | <img src="images/striking a confident pose_19_-3.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_19_0.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_19_3.0.png" width=256 height=256 /> | | <img src="images/striking a confident pose_20_-3.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_20_0.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` striking a confident pose ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.striking-a-confident-pose', weight_name='striking a confident pose.safetensors', adapter_name="striking a confident pose") # Activate the LoRA pipe.set_adapters(["striking a confident pose"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, striking a confident pose" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
{"base_model": "stabilityai/stable-diffusion-xl-base-1.0", "language": ["en"], "license": "mit", "tags": ["text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "diffusers"], "thumbnail": "images/evaluate/striking a confident pose.../striking a confident pose_17_3.0.png", "widget": [{"text": "striking a confident pose", "output": {"url": "images/striking a confident pose_17_3.0.png"}}, {"text": "striking a confident pose", "output": {"url": "images/striking a confident pose_19_3.0.png"}}, {"text": "striking a confident pose", "output": {"url": "images/striking a confident pose_20_3.0.png"}}, {"text": "striking a confident pose", "output": {"url": "images/striking a confident pose_21_3.0.png"}}, {"text": "striking a confident pose", "output": {"url": "images/striking a confident pose_22_3.0.png"}}], "inference": false, "instance_prompt": "striking a confident pose"}
dataset
null
579
LoneStriker/BioMistral-7B-DARE-GGUF
LoneStriker
text-generation
[ "transformers", "gguf", "mergekit", "merge", "dare", "medical", "biology", "text-generation", "en", "fr", "nl", "es", "it", "pl", "ro", "de", "dataset:pubmed", "arxiv:2311.03099", "arxiv:2306.01708", "arxiv:2402.10373", "base_model:BioMistral/BioMistral-7B", "base_model:merge:BioMistral/BioMistral-7B", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:merge:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2024-02-19T15:10:06Z
2024-02-19T15:30:23+00:00
19
0
--- base_model: - BioMistral/BioMistral-7B - mistralai/Mistral-7B-Instruct-v0.1 datasets: - pubmed language: - en - fr - nl - es - it - pl - ro - de library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - mergekit - merge - dare - medical - biology --- # BioMistral-7B-mistral7instruct-dare This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as a base. ### Models Merged The following models were included in the merge: * [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-Instruct-v0.1 # No parameters necessary for base model - model: BioMistral/BioMistral-7B parameters: density: 0.5 weight: 0.5 merge_method: dare_ties base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` <p align="center"> <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/> </p> # BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains **Abstract:** Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released. # 1. BioMistral models **BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC. 
| Model Name | Base Model | Model Type | Sequence Length | Download | |:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:| | BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) | | BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) | | BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) | # 2. Quantized Models | Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download | |:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:| | BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) | | BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) | | BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) | | BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) | | BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) | # 2. Using BioMistral You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follow. Loading the model and tokenizer : ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B") model = AutoModel.from_pretrained("BioMistral/BioMistral-7B") ``` # 3. Supervised Fine-tuning Benchmark | | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. 
| |-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------| | **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 | | **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 | | | | | | | | | | | | | | | **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 | | **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** | | **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 | | **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> | | | | | | | | | | | | | | | **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 | | **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 | | **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 | | **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 | | | | | | | | | | | | | | | **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 | Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT. # Citation BibTeX Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373) ```bibtex @misc{labrak2024biomistral, title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains}, author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour}, year={2024}, eprint={2402.10373}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
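For text generation (as opposed to extracting hidden states with the loading snippet above), the model is typically loaded with a causal-LM head. A sketch along those lines for the DARE merge (the use of `AutoModelForCausalLM` and the generation settings are assumptions for illustration, not part of the original instructions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BioMistral/BioMistral-7B-DARE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Question: What are the most common symptoms of iron-deficiency anemia?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```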
[ "MEDQA", "PUBMEDQA" ]
BioNLP
# BioMistral-7B-mistral7instruct-dare This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as a base. ### Models Merged The following models were included in the merge: * [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-Instruct-v0.1 # No parameters necessary for base model - model: BioMistral/BioMistral-7B parameters: density: 0.5 weight: 0.5 merge_method: dare_ties base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` <p align="center"> <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/> </p> # BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains **Abstract:** Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released. # 1. BioMistral models **BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC. 
| Model Name | Base Model | Model Type | Sequence Length | Download | |:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:| | BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) | | BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) | | BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) | # 2. Quantized Models | Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download | |:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:| | BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) | | BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) | | BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) | | BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) | | BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) | # 2. Using BioMistral You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follow. Loading the model and tokenizer : ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B") model = AutoModel.from_pretrained("BioMistral/BioMistral-7B") ``` # 3. Supervised Fine-tuning Benchmark | | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. 
| |-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------| | **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 | | **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 | | | | | | | | | | | | | | | **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 | | **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** | | **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 | | **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> | | | | | | | | | | | | | | | **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 | | **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 | | **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 | | **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 | | | | | | | | | | | | | | | **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 | Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT. # Citation BibTeX Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373) ```bibtex @misc{labrak2024biomistral, title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains}, author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour}, year={2024}, eprint={2402.10373}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"base_model": ["BioMistral/BioMistral-7B", "mistralai/Mistral-7B-Instruct-v0.1"], "datasets": ["pubmed"], "language": ["en", "fr", "nl", "es", "it", "pl", "ro", "de"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["mergekit", "merge", "dare", "medical", "biology"]}
dataset
null
580
tiaTai/Osiris_asr_model
tiaTai
automatic-speech-recognition
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-01-02T16:50:50Z
2024-01-02T19:58:46+00:00
7
0
--- base_model: facebook/wav2vec2-base license: apache-2.0 metrics: - wer tags: - generated_from_trainer model-index: - name: Osiris_asr_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Osiris_asr_model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0600 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 45.8209 | 50.0 | 50 | 21.0347 | 1.0182 | | 10.2898 | 100.0 | 100 | 5.1552 | 1.0 | | 5.7188 | 150.0 | 150 | 4.9140 | 1.0 | | 5.3358 | 200.0 | 200 | 4.7650 | 1.0 | | 5.1381 | 250.0 | 250 | 4.6797 | 1.0 | | 4.9841 | 300.0 | 300 | 4.6168 | 1.0 | | 4.9255 | 350.0 | 350 | 4.5741 | 1.0 | | 4.8353 | 400.0 | 400 | 4.5321 | 1.0 | | 4.7704 | 450.0 | 450 | 4.5100 | 1.0 | | 4.6257 | 500.0 | 500 | 3.9382 | 1.0 | | 3.8106 | 550.0 | 550 | 3.3939 | 1.0 | | 3.5095 | 600.0 | 600 | 3.2887 | 1.0 | | 3.3716 | 650.0 | 650 | 3.1967 | 1.0 | | 3.3025 | 700.0 | 700 | 3.1539 | 1.0 | | 3.2532 | 750.0 | 750 | 3.1477 | 1.0 | | 3.2086 | 800.0 | 800 | 3.0984 | 1.0 | | 3.1889 | 850.0 | 850 | 3.0857 | 1.0 | | 3.162 | 900.0 | 900 | 3.0819 | 1.0 | | 3.1411 | 950.0 | 950 | 3.0610 | 1.0 | | 3.1397 | 1000.0 | 1000 | 3.0600 | 1.0 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
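The card above lists hyperparameters and losses but no inference example. A minimal sketch follows, assuming the processor (tokenizer and feature extractor) was saved alongside the weights and that the input is 16 kHz mono audio; `sample.wav` is a placeholder path, and given the reported WER of 1.0 the transcriptions should be treated as illustrative of the API rather than usable output.

```python
from transformers import pipeline

# Sketch only: run the fine-tuned wav2vec2 checkpoint on a local audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="tiaTai/Osiris_asr_model",
)

result = asr("sample.wav")  # placeholder; expects 16 kHz mono audio
print(result["text"])
```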
[ "OSIRIS" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Osiris_asr_model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0600 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 45.8209 | 50.0 | 50 | 21.0347 | 1.0182 | | 10.2898 | 100.0 | 100 | 5.1552 | 1.0 | | 5.7188 | 150.0 | 150 | 4.9140 | 1.0 | | 5.3358 | 200.0 | 200 | 4.7650 | 1.0 | | 5.1381 | 250.0 | 250 | 4.6797 | 1.0 | | 4.9841 | 300.0 | 300 | 4.6168 | 1.0 | | 4.9255 | 350.0 | 350 | 4.5741 | 1.0 | | 4.8353 | 400.0 | 400 | 4.5321 | 1.0 | | 4.7704 | 450.0 | 450 | 4.5100 | 1.0 | | 4.6257 | 500.0 | 500 | 3.9382 | 1.0 | | 3.8106 | 550.0 | 550 | 3.3939 | 1.0 | | 3.5095 | 600.0 | 600 | 3.2887 | 1.0 | | 3.3716 | 650.0 | 650 | 3.1967 | 1.0 | | 3.3025 | 700.0 | 700 | 3.1539 | 1.0 | | 3.2532 | 750.0 | 750 | 3.1477 | 1.0 | | 3.2086 | 800.0 | 800 | 3.0984 | 1.0 | | 3.1889 | 850.0 | 850 | 3.0857 | 1.0 | | 3.162 | 900.0 | 900 | 3.0819 | 1.0 | | 3.1411 | 950.0 | 950 | 3.0610 | 1.0 | | 3.1397 | 1000.0 | 1000 | 3.0600 | 1.0 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
{"base_model": "facebook/wav2vec2-base", "license": "apache-2.0", "metrics": ["wer"], "tags": ["generated_from_trainer"], "model-index": [{"name": "Osiris_asr_model", "results": []}]}
dataset
null
581
lesso02/e26ef62f-ad75-4c7e-b75f-89d444182696
lesso02
null
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28", "base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28", "8-bit", "bitsandbytes", "region:us" ]
2025-01-14T01:33:26Z
2025-01-14T05:17:53+00:00
1
0
--- base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28 library_name: peft tags: - axolotl - generated_from_trainer model-index: - name: e26ef62f-ad75-4c7e-b75f-89d444182696 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28 bf16: true chat_template: llama3 datasets: - data_files: - cb306a6787029623_train_data.json ds_type: json format: custom path: /workspace/input_data/cb306a6787029623_train_data.json type: field_input: context field_instruction: question field_output: final_decision format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: lesso02/e26ef62f-ad75-4c7e-b75f-89d444182696 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 25 micro_batch_size: 2 mlflow_experiment_name: /tmp/cb306a6787029623_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 512 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b25d09e0-49c5-487b-9eac-8b665ef30094 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b25d09e0-49c5-487b-9eac-8b665ef30094 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e26ef62f-ad75-4c7e-b75f-89d444182696 This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3086 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 13.0737 | 0.0000 | 1 | 13.4156 | | 13.3146 | 0.0002 | 5 | 9.1773 | | 3.9244 | 0.0004 | 10 | 2.9126 | | 1.6555 | 0.0006 | 15 | 1.1353 | | 1.0088 | 0.0008 | 20 | 0.4048 | | 0.5886 | 0.0010 | 25 | 0.3086 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
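The axolotl config above describes an 8-bit LoRA fine-tune but gives no inference snippet. The sketch below is an assumption-laden illustration, not part of the card: it loads the adapter on top of the base model with PEFT, mirrors `load_in_8bit: true` via a bitsandbytes config, and builds the prompt from the `format: '{instruction} {input}'` field; the question and context strings are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28"
adapter_id = "lesso02/e26ef62f-ad75-4c7e-b75f-89d444182696"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors load_in_8bit: true
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt layout follows the axolotl `format: '{instruction} {input}'` field;
# the question and context below are illustrative placeholders.
question = "Does aspirin reduce the risk of cardiovascular events?"
context = "Abstract text goes here."
prompt = f"{question} {context}"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```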
[ "PUBMEDQA" ]
BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28 bf16: true chat_template: llama3 datasets: - data_files: - cb306a6787029623_train_data.json ds_type: json format: custom path: /workspace/input_data/cb306a6787029623_train_data.json type: field_input: context field_instruction: question field_output: final_decision format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: lesso02/e26ef62f-ad75-4c7e-b75f-89d444182696 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 25 micro_batch_size: 2 mlflow_experiment_name: /tmp/cb306a6787029623_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 512 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b25d09e0-49c5-487b-9eac-8b665ef30094 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b25d09e0-49c5-487b-9eac-8b665ef30094 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e26ef62f-ad75-4c7e-b75f-89d444182696 This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3086 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 13.0737 | 0.0000 | 1 | 13.4156 | | 13.3146 | 0.0002 | 5 | 9.1773 | | 3.9244 | 0.0004 | 10 | 2.9126 | | 1.6555 | 0.0006 | 15 | 1.1353 | | 1.0088 | 0.0008 | 20 | 0.4048 | | 0.5886 | 0.0010 | 25 | 0.3086 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
{"base_model": "rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28", "library_name": "peft", "tags": ["axolotl", "generated_from_trainer"], "model-index": [{"name": "e26ef62f-ad75-4c7e-b75f-89d444182696", "results": []}]}
dataset
null
582
lambdavi/span-marker-luke-base-conll2003
lambdavi
token-classification
[ "span-marker", "safetensors", "token-classification", "ner", "named-entity-recognition", "generated_from_span_marker_trainer", "dataset:conll2003", "model-index", "region:us" ]
2024-01-08T12:07:34Z
2024-01-09T22:26:28+00:00
5
2
--- datasets: - conll2003 library_name: span-marker metrics: - precision - recall - f1 pipeline_tag: token-classification tags: - span-marker - token-classification - ner - named-entity-recognition - generated_from_span_marker_trainer widget: - text: Atlanta Games silver medal winner Edwards has called on other leading athletes to take part in the Sarajevo meeting--a goodwill gesture towards Bosnia as it recovers from the war in the Balkans--two days after the grand prix final in Milan. - text: Portsmouth:Middlesex 199 and 426 (J. Pooley 111,M. Ramprakash 108,M. Gatting 83), Hampshire 232 and 109-5. - text: Poland's Foreign Minister Dariusz Rosati will visit Yugoslavia on September 3 and 4 to revive a dialogue between the two governments which was effectively frozen in 1992,PAP news agency reported on Friday. - text: The authorities are apparently extremely afraid of any political and social discontent," said Xiao,in Manila to attend an Amnesty International conference on human rights in China. - text: American Nate Miller successfully defended his WBA cruiserweight title when he knocked out compatriot James Heath in the seventh round of their bout on Saturday. model-index: - name: SpanMarker results: - task: type: token-classification name: Named Entity Recognition dataset: name: Unknown type: conll2003 split: eval metrics: - type: f1 value: 0.9550004205568171 name: F1 - type: precision value: 0.9542780299209951 name: Precision - type: recall value: 0.9557239057239058 name: Recall --- # SpanMarker This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [conll2003](https://huggingface.co/datasets/conll2003) dataset that can be used for Named Entity Recognition. ## Model Details Important Note: I used the Tokenizer from "roberta-base". ```diff from span_marker import SpanMarkerModel from span_marker.tokenizer import SpanMarkerTokenizer # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-base-conll2003") +tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.tokenizer.config) +model.set_tokenizer(tokenizer) # Run inference entities = model.predict("Portsmouth:Middlesex 199 and 426 (J. Pooley 111,M. Ramprakash 108,M. 
Gatting 83), Hampshire 232 and 109-5.") ``` ### Model Description - **Model Type:** SpanMarker <!-- - **Encoder:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Maximum Entity Length:** 8 words - **Training Dataset:** [conll2003](https://huggingface.co/datasets/conll2003) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER) - **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf) ### Model Labels | Label | Examples | |:------|:--------------------------------------------------------------| | LOC | "Germany", "BRUSSELS", "Britain" | | MISC | "German", "British", "EU-wide" | | ORG | "European Commission", "EU", "European Union" | | PER | "Werner Zwingmann", "Nikolaus van der Pas", "Peter Blackburn" | ## Uses ### Direct Use for Inference ```python from span_marker import SpanMarkerModel from span_marker.tokenizer import SpanMarkerTokenizer # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-base-conll2003") tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.tokenizer.config) model.set_tokenizer(tokenizer) # Run inference entities = model.predict("Portsmouth:Middlesex 199 and 426 (J. Pooley 111,M. Ramprakash 108,M. Gatting 83), Hampshire 232 and 109-5.") ``` ### Downstream Use You can finetune this model on your own dataset. <details><summary>Click to expand</summary> ```python from span_marker import SpanMarkerModel, Trainer # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("span_marker_model_id") # Specify a Dataset with "tokens" and "ner_tag" columns dataset = load_dataset("conll2003") # For example CoNLL2003 # Initialize a Trainer using the pretrained model & dataset trainer = Trainer( model=model, train_dataset=dataset["train"], eval_dataset=dataset["validation"], ) trainer.train() trainer.save_model("span_marker_model_id-finetuned") ``` </details> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:----------------------|:----|:--------|:----| | Sentence length | 1 | 14.5019 | 113 | | Entities per sentence | 0 | 1.6736 | 20 | ### Training Hyperparameters - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training Results | Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy | |:-----:|:----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:| | 1.0 | 883 | 0.0123 | 0.9293 | 0.9274 | 0.9284 | 0.9848 | | 2.0 | 1766 | 0.0089 | 0.9412 | 0.9456 | 0.9434 | 0.9882 | | 3.0 | 2649 | 0.0077 | 0.9499 | 0.9505 | 0.9502 | 0.9893 | | 4.0 | 3532 | 0.0070 | 0.9527 | 0.9537 | 0.9532 | 0.9900 | | 5.0 | 4415 | 0.0068 | 0.9543 | 0.9557 | 0.9550 | 0.9902 | ### Framework Versions - Python: 3.10.12 - SpanMarker: 1.5.0 - Transformers: 4.36.0 - PyTorch: 2.0.0 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ``` @software{Aarsen_SpanMarker, author = {Aarsen, Tom}, license = {Apache-2.0}, title = {{SpanMarker for Named Entity Recognition}}, url = {https://github.com/tomaarsen/SpanMarkerNER} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
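The training hyperparameters listed above map onto standard `transformers.TrainingArguments`, which the SpanMarker `Trainer` accepts. The sketch below is an assumption, not taken from the card: it continues from the released checkpoint (as in the card's own downstream-use snippet), and the `output_dir` and omitted evaluation/saving cadence are placeholders.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from span_marker import SpanMarkerModel, Trainer

# Sketch only: reproduce the listed hyperparameters on CoNLL2003.
dataset = load_dataset("conll2003")
model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-base-conll2003")

args = TrainingArguments(
    output_dir="span-marker-conll2003",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,        # total train batch size 16
    num_train_epochs=5,
    warmup_ratio=0.1,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```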
[ "MEDAL" ]
Non_BioNLP
# SpanMarker This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [conll2003](https://huggingface.co/datasets/conll2003) dataset that can be used for Named Entity Recognition. ## Model Details Important Note: I used the Tokenizer from "roberta-base". ```diff from span_marker import SpanMarkerModel from span_marker.tokenizer import SpanMarkerTokenizer # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-base-conll2003") +tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.tokenizer.config) +model.set_tokenizer(tokenizer) # Run inference entities = model.predict("Portsmouth:Middlesex 199 and 426 (J. Pooley 111,M. Ramprakash 108,M. Gatting 83), Hampshire 232 and 109-5.") ``` ### Model Description - **Model Type:** SpanMarker <!-- - **Encoder:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Maximum Entity Length:** 8 words - **Training Dataset:** [conll2003](https://huggingface.co/datasets/conll2003) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER) - **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf) ### Model Labels | Label | Examples | |:------|:--------------------------------------------------------------| | LOC | "Germany", "BRUSSELS", "Britain" | | MISC | "German", "British", "EU-wide" | | ORG | "European Commission", "EU", "European Union" | | PER | "Werner Zwingmann", "Nikolaus van der Pas", "Peter Blackburn" | ## Uses ### Direct Use for Inference ```python from span_marker import SpanMarkerModel from span_marker.tokenizer import SpanMarkerTokenizer # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-base-conll2003") tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.tokenizer.config) model.set_tokenizer(tokenizer) # Run inference entities = model.predict("Portsmouth:Middlesex 199 and 426 (J. Pooley 111,M. Ramprakash 108,M. Gatting 83), Hampshire 232 and 109-5.") ``` ### Downstream Use You can finetune this model on your own dataset. <details><summary>Click to expand</summary> ```python from span_marker import SpanMarkerModel, Trainer # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("span_marker_model_id") # Specify a Dataset with "tokens" and "ner_tag" columns dataset = load_dataset("conll2003") # For example CoNLL2003 # Initialize a Trainer using the pretrained model & dataset trainer = Trainer( model=model, train_dataset=dataset["train"], eval_dataset=dataset["validation"], ) trainer.train() trainer.save_model("span_marker_model_id-finetuned") ``` </details> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:----------------------|:----|:--------|:----| | Sentence length | 1 | 14.5019 | 113 | | Entities per sentence | 0 | 1.6736 | 20 | ### Training Hyperparameters - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training Results | Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy | |:-----:|:----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:| | 1.0 | 883 | 0.0123 | 0.9293 | 0.9274 | 0.9284 | 0.9848 | | 2.0 | 1766 | 0.0089 | 0.9412 | 0.9456 | 0.9434 | 0.9882 | | 3.0 | 2649 | 0.0077 | 0.9499 | 0.9505 | 0.9502 | 0.9893 | | 4.0 | 3532 | 0.0070 | 0.9527 | 0.9537 | 0.9532 | 0.9900 | | 5.0 | 4415 | 0.0068 | 0.9543 | 0.9557 | 0.9550 | 0.9902 | ### Framework Versions - Python: 3.10.12 - SpanMarker: 1.5.0 - Transformers: 4.36.0 - PyTorch: 2.0.0 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ``` @software{Aarsen_SpanMarker, author = {Aarsen, Tom}, license = {Apache-2.0}, title = {{SpanMarker for Named Entity Recognition}}, url = {https://github.com/tomaarsen/SpanMarkerNER} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"datasets": ["conll2003"], "library_name": "span-marker", "metrics": ["precision", "recall", "f1"], "pipeline_tag": "token-classification", "tags": ["span-marker", "token-classification", "ner", "named-entity-recognition", "generated_from_span_marker_trainer"], "widget": [{"text": "Atlanta Games silver medal winner Edwards has called on other leading athletes to take part in the Sarajevo meeting--a goodwill gesture towards Bosnia as it recovers from the war in the Balkans--two days after the grand prix final in Milan."}, {"text": "Portsmouth:Middlesex 199 and 426 (J. Pooley 111,M. Ramprakash 108,M. Gatting 83), Hampshire 232 and 109-5."}, {"text": "Poland's Foreign Minister Dariusz Rosati will visit Yugoslavia on September 3 and 4 to revive a dialogue between the two governments which was effectively frozen in 1992,PAP news agency reported on Friday."}, {"text": "The authorities are apparently extremely afraid of any political and social discontent,\" said Xiao,in Manila to attend an Amnesty International conference on human rights in China."}, {"text": "American Nate Miller successfully defended his WBA cruiserweight title when he knocked out compatriot James Heath in the seventh round of their bout on Saturday."}], "model-index": [{"name": "SpanMarker", "results": [{"task": {"type": "token-classification", "name": "Named Entity Recognition"}, "dataset": {"name": "Unknown", "type": "conll2003", "split": "eval"}, "metrics": [{"type": "f1", "value": 0.9550004205568171, "name": "F1"}, {"type": "precision", "value": 0.9542780299209951, "name": "Precision"}, {"type": "recall", "value": 0.9557239057239058, "name": "Recall"}]}]}]}
dataset
null
583
jinaai/jina-embeddings-v2-base-es
jinaai
feature-extraction
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "es", "en", "arxiv:2108.12409", "arxiv:2402.17016", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "region:eu" ]
2024-01-24T09:54:03Z
2025-01-06T16:27:28+00:00
34,138
33
--- language: - es - en license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb inference: false model-index: - name: jina-embeddings-v2-base-es results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 74.25373134328358 - type: ap value: 37.05201236793268 - type: f1 value: 68.16770391201077 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 78.30885 - type: ap value: 73.01622441156408 - type: f1 value: 78.20769284466313 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.324 - type: f1 value: 37.89543008761673 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.678000000000004 - type: f1 value: 38.122639506976 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 23.968999999999998 - type: map_at_10 value: 40.691 - type: map_at_100 value: 41.713 - type: map_at_1000 value: 41.719 - type: map_at_3 value: 35.42 - type: map_at_5 value: 38.442 - type: mrr_at_1 value: 24.395 - type: mrr_at_10 value: 40.853 - type: mrr_at_100 value: 41.869 - type: mrr_at_1000 value: 41.874 - type: mrr_at_3 value: 35.68 - type: mrr_at_5 value: 38.572 - type: ndcg_at_1 value: 23.968999999999998 - type: ndcg_at_10 value: 50.129999999999995 - type: ndcg_at_100 value: 54.364000000000004 - type: ndcg_at_1000 value: 54.494 - type: ndcg_at_3 value: 39.231 - type: ndcg_at_5 value: 44.694 - type: precision_at_1 value: 23.968999999999998 - type: precision_at_10 value: 8.036999999999999 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.761 - type: precision_at_5 value: 12.717 - type: recall_at_1 value: 23.968999999999998 - type: recall_at_10 value: 80.36999999999999 - type: recall_at_100 value: 98.578 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 50.28399999999999 - type: recall_at_5 value: 63.585 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 41.54886683150053 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.186028697637234 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 61.19432643698725 - type: mrr value: 75.28646176845622 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 86.3828259381228 - type: 
cos_sim_spearman value: 83.04647058342209 - type: euclidean_pearson value: 84.02895346096244 - type: euclidean_spearman value: 82.34524978635342 - type: manhattan_pearson value: 84.35030723233426 - type: manhattan_spearman value: 83.17177464337936 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.25649350649351 - type: f1 value: 85.22320474023192 - task: type: Clustering dataset: name: MTEB BigPatentClustering type: jinaai/big-patent-clustering config: default split: test revision: 62d5330920bca426ce9d3c76ea914f15fc83e891 metrics: - type: v_measure value: 20.42929408254094 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.165318177498136 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 28.89030154229562 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.119 - type: map_at_10 value: 42.092 - type: map_at_100 value: 43.506 - type: map_at_1000 value: 43.631 - type: map_at_3 value: 38.373000000000005 - type: map_at_5 value: 40.501 - type: mrr_at_1 value: 38.196999999999996 - type: mrr_at_10 value: 48.237 - type: mrr_at_100 value: 48.914 - type: mrr_at_1000 value: 48.959 - type: mrr_at_3 value: 45.279 - type: mrr_at_5 value: 47.11 - type: ndcg_at_1 value: 38.196999999999996 - type: ndcg_at_10 value: 48.849 - type: ndcg_at_100 value: 53.713 - type: ndcg_at_1000 value: 55.678000000000004 - type: ndcg_at_3 value: 43.546 - type: ndcg_at_5 value: 46.009 - type: precision_at_1 value: 38.196999999999996 - type: precision_at_10 value: 9.642000000000001 - type: precision_at_100 value: 1.5190000000000001 - type: precision_at_1000 value: 0.199 - type: precision_at_3 value: 21.65 - type: precision_at_5 value: 15.708 - type: recall_at_1 value: 30.119 - type: recall_at_10 value: 61.788 - type: recall_at_100 value: 82.14399999999999 - type: recall_at_1000 value: 95.003 - type: recall_at_3 value: 45.772 - type: recall_at_5 value: 53.04600000000001 - type: map_at_1 value: 28.979 - type: map_at_10 value: 37.785000000000004 - type: map_at_100 value: 38.945 - type: map_at_1000 value: 39.071 - type: map_at_3 value: 35.083999999999996 - type: map_at_5 value: 36.571999999999996 - type: mrr_at_1 value: 36.242000000000004 - type: mrr_at_10 value: 43.552 - type: mrr_at_100 value: 44.228 - type: mrr_at_1000 value: 44.275999999999996 - type: mrr_at_3 value: 41.359 - type: mrr_at_5 value: 42.598 - type: ndcg_at_1 value: 36.242000000000004 - type: ndcg_at_10 value: 42.94 - type: ndcg_at_100 value: 47.343 - type: ndcg_at_1000 value: 49.538 - type: ndcg_at_3 value: 39.086999999999996 - type: ndcg_at_5 value: 40.781 - type: precision_at_1 value: 36.242000000000004 - type: precision_at_10 value: 7.954999999999999 - type: precision_at_100 value: 1.303 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 18.556 - type: precision_at_5 value: 13.145999999999999 - type: recall_at_1 value: 28.979 - type: recall_at_10 value: 51.835 - type: recall_at_100 value: 70.47 - type: recall_at_1000 value: 84.68299999999999 - 
type: recall_at_3 value: 40.410000000000004 - type: recall_at_5 value: 45.189 - type: map_at_1 value: 37.878 - type: map_at_10 value: 49.903 - type: map_at_100 value: 50.797000000000004 - type: map_at_1000 value: 50.858000000000004 - type: map_at_3 value: 46.526 - type: map_at_5 value: 48.615 - type: mrr_at_1 value: 43.135 - type: mrr_at_10 value: 53.067 - type: mrr_at_100 value: 53.668000000000006 - type: mrr_at_1000 value: 53.698 - type: mrr_at_3 value: 50.449 - type: mrr_at_5 value: 52.117000000000004 - type: ndcg_at_1 value: 43.135 - type: ndcg_at_10 value: 55.641 - type: ndcg_at_100 value: 59.427 - type: ndcg_at_1000 value: 60.655 - type: ndcg_at_3 value: 49.969 - type: ndcg_at_5 value: 53.075 - type: precision_at_1 value: 43.135 - type: precision_at_10 value: 8.997 - type: precision_at_100 value: 1.1809999999999998 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.215 - type: precision_at_5 value: 15.586 - type: recall_at_1 value: 37.878 - type: recall_at_10 value: 69.405 - type: recall_at_100 value: 86.262 - type: recall_at_1000 value: 95.012 - type: recall_at_3 value: 54.458 - type: recall_at_5 value: 61.965 - type: map_at_1 value: 24.853 - type: map_at_10 value: 32.402 - type: map_at_100 value: 33.417 - type: map_at_1000 value: 33.498 - type: map_at_3 value: 30.024 - type: map_at_5 value: 31.407 - type: mrr_at_1 value: 26.667 - type: mrr_at_10 value: 34.399 - type: mrr_at_100 value: 35.284 - type: mrr_at_1000 value: 35.345 - type: mrr_at_3 value: 32.109 - type: mrr_at_5 value: 33.375 - type: ndcg_at_1 value: 26.667 - type: ndcg_at_10 value: 36.854 - type: ndcg_at_100 value: 42.196 - type: ndcg_at_1000 value: 44.303 - type: ndcg_at_3 value: 32.186 - type: ndcg_at_5 value: 34.512 - type: precision_at_1 value: 26.667 - type: precision_at_10 value: 5.559 - type: precision_at_100 value: 0.88 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 13.333 - type: precision_at_5 value: 9.379 - type: recall_at_1 value: 24.853 - type: recall_at_10 value: 48.636 - type: recall_at_100 value: 73.926 - type: recall_at_1000 value: 89.94 - type: recall_at_3 value: 36.266 - type: recall_at_5 value: 41.723 - type: map_at_1 value: 14.963999999999999 - type: map_at_10 value: 22.591 - type: map_at_100 value: 23.735999999999997 - type: map_at_1000 value: 23.868000000000002 - type: map_at_3 value: 20.093 - type: map_at_5 value: 21.499 - type: mrr_at_1 value: 18.407999999999998 - type: mrr_at_10 value: 26.863 - type: mrr_at_100 value: 27.87 - type: mrr_at_1000 value: 27.947 - type: mrr_at_3 value: 24.254 - type: mrr_at_5 value: 25.784000000000002 - type: ndcg_at_1 value: 18.407999999999998 - type: ndcg_at_10 value: 27.549 - type: ndcg_at_100 value: 33.188 - type: ndcg_at_1000 value: 36.312 - type: ndcg_at_3 value: 22.862 - type: ndcg_at_5 value: 25.130999999999997 - type: precision_at_1 value: 18.407999999999998 - type: precision_at_10 value: 5.087 - type: precision_at_100 value: 0.923 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 10.987 - type: precision_at_5 value: 8.209 - type: recall_at_1 value: 14.963999999999999 - type: recall_at_10 value: 38.673 - type: recall_at_100 value: 63.224999999999994 - type: recall_at_1000 value: 85.443 - type: recall_at_3 value: 25.840000000000003 - type: recall_at_5 value: 31.503999999999998 - type: map_at_1 value: 27.861000000000004 - type: map_at_10 value: 37.562 - type: map_at_100 value: 38.906 - type: map_at_1000 value: 39.021 - type: map_at_3 value: 34.743 - type: map_at_5 value: 36.168 - type: mrr_at_1 
value: 34.455999999999996 - type: mrr_at_10 value: 43.428 - type: mrr_at_100 value: 44.228 - type: mrr_at_1000 value: 44.278 - type: mrr_at_3 value: 41.001 - type: mrr_at_5 value: 42.315000000000005 - type: ndcg_at_1 value: 34.455999999999996 - type: ndcg_at_10 value: 43.477 - type: ndcg_at_100 value: 48.953 - type: ndcg_at_1000 value: 51.19200000000001 - type: ndcg_at_3 value: 38.799 - type: ndcg_at_5 value: 40.743 - type: precision_at_1 value: 34.455999999999996 - type: precision_at_10 value: 7.902000000000001 - type: precision_at_100 value: 1.244 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 18.511 - type: precision_at_5 value: 12.859000000000002 - type: recall_at_1 value: 27.861000000000004 - type: recall_at_10 value: 55.36 - type: recall_at_100 value: 78.384 - type: recall_at_1000 value: 93.447 - type: recall_at_3 value: 41.926 - type: recall_at_5 value: 47.257 - type: map_at_1 value: 26.375 - type: map_at_10 value: 35.571000000000005 - type: map_at_100 value: 36.785000000000004 - type: map_at_1000 value: 36.905 - type: map_at_3 value: 32.49 - type: map_at_5 value: 34.123999999999995 - type: mrr_at_1 value: 32.647999999999996 - type: mrr_at_10 value: 40.598 - type: mrr_at_100 value: 41.484 - type: mrr_at_1000 value: 41.546 - type: mrr_at_3 value: 37.9 - type: mrr_at_5 value: 39.401 - type: ndcg_at_1 value: 32.647999999999996 - type: ndcg_at_10 value: 41.026 - type: ndcg_at_100 value: 46.365 - type: ndcg_at_1000 value: 48.876 - type: ndcg_at_3 value: 35.843 - type: ndcg_at_5 value: 38.118 - type: precision_at_1 value: 32.647999999999996 - type: precision_at_10 value: 7.443 - type: precision_at_100 value: 1.18 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 16.819 - type: precision_at_5 value: 11.985999999999999 - type: recall_at_1 value: 26.375 - type: recall_at_10 value: 52.471000000000004 - type: recall_at_100 value: 75.354 - type: recall_at_1000 value: 92.35 - type: recall_at_3 value: 37.893 - type: recall_at_5 value: 43.935 - type: map_at_1 value: 25.012666666666668 - type: map_at_10 value: 33.685833333333335 - type: map_at_100 value: 34.849250000000005 - type: map_at_1000 value: 34.970083333333335 - type: map_at_3 value: 31.065083333333334 - type: map_at_5 value: 32.494416666666666 - type: mrr_at_1 value: 29.772666666666662 - type: mrr_at_10 value: 37.824666666666666 - type: mrr_at_100 value: 38.66741666666666 - type: mrr_at_1000 value: 38.72916666666666 - type: mrr_at_3 value: 35.54575 - type: mrr_at_5 value: 36.81524999999999 - type: ndcg_at_1 value: 29.772666666666662 - type: ndcg_at_10 value: 38.78241666666666 - type: ndcg_at_100 value: 43.84591666666667 - type: ndcg_at_1000 value: 46.275416666666665 - type: ndcg_at_3 value: 34.33416666666667 - type: ndcg_at_5 value: 36.345166666666664 - type: precision_at_1 value: 29.772666666666662 - type: precision_at_10 value: 6.794916666666667 - type: precision_at_100 value: 1.106416666666667 - type: precision_at_1000 value: 0.15033333333333335 - type: precision_at_3 value: 15.815083333333336 - type: precision_at_5 value: 11.184166666666664 - type: recall_at_1 value: 25.012666666666668 - type: recall_at_10 value: 49.748500000000014 - type: recall_at_100 value: 72.11341666666667 - type: recall_at_1000 value: 89.141 - type: recall_at_3 value: 37.242999999999995 - type: recall_at_5 value: 42.49033333333333 - type: map_at_1 value: 23.177 - type: map_at_10 value: 29.310000000000002 - type: map_at_100 value: 30.188 - type: map_at_1000 value: 30.29 - type: map_at_3 value: 27.356 - type: map_at_5 value: 
28.410999999999998 - type: mrr_at_1 value: 26.074 - type: mrr_at_10 value: 32.002 - type: mrr_at_100 value: 32.838 - type: mrr_at_1000 value: 32.909 - type: mrr_at_3 value: 30.317 - type: mrr_at_5 value: 31.222 - type: ndcg_at_1 value: 26.074 - type: ndcg_at_10 value: 32.975 - type: ndcg_at_100 value: 37.621 - type: ndcg_at_1000 value: 40.253 - type: ndcg_at_3 value: 29.452 - type: ndcg_at_5 value: 31.020999999999997 - type: precision_at_1 value: 26.074 - type: precision_at_10 value: 5.077 - type: precision_at_100 value: 0.8049999999999999 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 12.526000000000002 - type: precision_at_5 value: 8.588999999999999 - type: recall_at_1 value: 23.177 - type: recall_at_10 value: 41.613 - type: recall_at_100 value: 63.287000000000006 - type: recall_at_1000 value: 83.013 - type: recall_at_3 value: 31.783 - type: recall_at_5 value: 35.769 - type: map_at_1 value: 15.856 - type: map_at_10 value: 22.651 - type: map_at_100 value: 23.649 - type: map_at_1000 value: 23.783 - type: map_at_3 value: 20.591 - type: map_at_5 value: 21.684 - type: mrr_at_1 value: 19.408 - type: mrr_at_10 value: 26.51 - type: mrr_at_100 value: 27.356 - type: mrr_at_1000 value: 27.439999999999998 - type: mrr_at_3 value: 24.547 - type: mrr_at_5 value: 25.562 - type: ndcg_at_1 value: 19.408 - type: ndcg_at_10 value: 27.072000000000003 - type: ndcg_at_100 value: 31.980999999999998 - type: ndcg_at_1000 value: 35.167 - type: ndcg_at_3 value: 23.338 - type: ndcg_at_5 value: 24.94 - type: precision_at_1 value: 19.408 - type: precision_at_10 value: 4.9590000000000005 - type: precision_at_100 value: 0.8710000000000001 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 11.138 - type: precision_at_5 value: 7.949000000000001 - type: recall_at_1 value: 15.856 - type: recall_at_10 value: 36.578 - type: recall_at_100 value: 58.89 - type: recall_at_1000 value: 81.743 - type: recall_at_3 value: 25.94 - type: recall_at_5 value: 30.153999999999996 - type: map_at_1 value: 25.892 - type: map_at_10 value: 33.899 - type: map_at_100 value: 34.955000000000005 - type: map_at_1000 value: 35.066 - type: map_at_3 value: 31.41 - type: map_at_5 value: 32.669 - type: mrr_at_1 value: 30.224 - type: mrr_at_10 value: 37.936 - type: mrr_at_100 value: 38.777 - type: mrr_at_1000 value: 38.85 - type: mrr_at_3 value: 35.821 - type: mrr_at_5 value: 36.894 - type: ndcg_at_1 value: 30.224 - type: ndcg_at_10 value: 38.766 - type: ndcg_at_100 value: 43.806 - type: ndcg_at_1000 value: 46.373999999999995 - type: ndcg_at_3 value: 34.325 - type: ndcg_at_5 value: 36.096000000000004 - type: precision_at_1 value: 30.224 - type: precision_at_10 value: 6.446000000000001 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 15.392 - type: precision_at_5 value: 10.671999999999999 - type: recall_at_1 value: 25.892 - type: recall_at_10 value: 49.573 - type: recall_at_100 value: 71.885 - type: recall_at_1000 value: 89.912 - type: recall_at_3 value: 37.226 - type: recall_at_5 value: 41.74 - type: map_at_1 value: 23.915 - type: map_at_10 value: 33.613 - type: map_at_100 value: 35.333999999999996 - type: map_at_1000 value: 35.563 - type: map_at_3 value: 31.203999999999997 - type: map_at_5 value: 32.479 - type: mrr_at_1 value: 29.447000000000003 - type: mrr_at_10 value: 38.440000000000005 - type: mrr_at_100 value: 39.459 - type: mrr_at_1000 value: 39.513999999999996 - type: mrr_at_3 value: 36.495 - type: mrr_at_5 value: 37.592 - type: ndcg_at_1 
value: 29.447000000000003 - type: ndcg_at_10 value: 39.341 - type: ndcg_at_100 value: 45.382 - type: ndcg_at_1000 value: 47.921 - type: ndcg_at_3 value: 35.671 - type: ndcg_at_5 value: 37.299 - type: precision_at_1 value: 29.447000000000003 - type: precision_at_10 value: 7.648000000000001 - type: precision_at_100 value: 1.567 - type: precision_at_1000 value: 0.241 - type: precision_at_3 value: 17.194000000000003 - type: precision_at_5 value: 12.253 - type: recall_at_1 value: 23.915 - type: recall_at_10 value: 49.491 - type: recall_at_100 value: 76.483 - type: recall_at_1000 value: 92.674 - type: recall_at_3 value: 38.878 - type: recall_at_5 value: 43.492 - type: map_at_1 value: 20.283 - type: map_at_10 value: 26.851000000000003 - type: map_at_100 value: 27.973 - type: map_at_1000 value: 28.087 - type: map_at_3 value: 24.887 - type: map_at_5 value: 25.804 - type: mrr_at_1 value: 22.366 - type: mrr_at_10 value: 28.864 - type: mrr_at_100 value: 29.903000000000002 - type: mrr_at_1000 value: 29.988 - type: mrr_at_3 value: 27.017999999999997 - type: mrr_at_5 value: 27.813 - type: ndcg_at_1 value: 22.366 - type: ndcg_at_10 value: 30.898999999999997 - type: ndcg_at_100 value: 36.176 - type: ndcg_at_1000 value: 39.036 - type: ndcg_at_3 value: 26.932000000000002 - type: ndcg_at_5 value: 28.416999999999998 - type: precision_at_1 value: 22.366 - type: precision_at_10 value: 4.824 - type: precision_at_100 value: 0.804 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 11.459999999999999 - type: precision_at_5 value: 7.8740000000000006 - type: recall_at_1 value: 20.283 - type: recall_at_10 value: 41.559000000000005 - type: recall_at_100 value: 65.051 - type: recall_at_1000 value: 86.47200000000001 - type: recall_at_3 value: 30.524 - type: recall_at_5 value: 34.11 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 11.326 - type: map_at_10 value: 19.357 - type: map_at_100 value: 21.014 - type: map_at_1000 value: 21.188000000000002 - type: map_at_3 value: 16.305 - type: map_at_5 value: 17.886 - type: mrr_at_1 value: 24.820999999999998 - type: mrr_at_10 value: 36.150999999999996 - type: mrr_at_100 value: 37.080999999999996 - type: mrr_at_1000 value: 37.123 - type: mrr_at_3 value: 32.952999999999996 - type: mrr_at_5 value: 34.917 - type: ndcg_at_1 value: 24.820999999999998 - type: ndcg_at_10 value: 27.131 - type: ndcg_at_100 value: 33.841 - type: ndcg_at_1000 value: 37.159 - type: ndcg_at_3 value: 22.311 - type: ndcg_at_5 value: 24.026 - type: precision_at_1 value: 24.820999999999998 - type: precision_at_10 value: 8.450000000000001 - type: precision_at_100 value: 1.557 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 16.612 - type: precision_at_5 value: 12.808 - type: recall_at_1 value: 11.326 - type: recall_at_10 value: 32.548 - type: recall_at_100 value: 55.803000000000004 - type: recall_at_1000 value: 74.636 - type: recall_at_3 value: 20.549 - type: recall_at_5 value: 25.514 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 7.481 - type: map_at_10 value: 15.043999999999999 - type: map_at_100 value: 20.194000000000003 - type: map_at_1000 value: 21.423000000000002 - type: map_at_3 value: 11.238 - type: map_at_5 value: 12.828999999999999 - type: mrr_at_1 value: 54.50000000000001 - type: mrr_at_10 value: 64.713 - type: mrr_at_100 value: 65.216 - type: mrr_at_1000 value: 
65.23 - type: mrr_at_3 value: 62.74999999999999 - type: mrr_at_5 value: 63.87500000000001 - type: ndcg_at_1 value: 43.375 - type: ndcg_at_10 value: 32.631 - type: ndcg_at_100 value: 36.338 - type: ndcg_at_1000 value: 43.541000000000004 - type: ndcg_at_3 value: 36.746 - type: ndcg_at_5 value: 34.419 - type: precision_at_1 value: 54.50000000000001 - type: precision_at_10 value: 24.825 - type: precision_at_100 value: 7.698 - type: precision_at_1000 value: 1.657 - type: precision_at_3 value: 38.917 - type: precision_at_5 value: 32.35 - type: recall_at_1 value: 7.481 - type: recall_at_10 value: 20.341 - type: recall_at_100 value: 41.778 - type: recall_at_1000 value: 64.82 - type: recall_at_3 value: 12.748000000000001 - type: recall_at_5 value: 15.507000000000001 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.580000000000005 - type: f1 value: 41.5149462395095 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 61.683 - type: map_at_10 value: 73.071 - type: map_at_100 value: 73.327 - type: map_at_1000 value: 73.341 - type: map_at_3 value: 71.446 - type: map_at_5 value: 72.557 - type: mrr_at_1 value: 66.44200000000001 - type: mrr_at_10 value: 77.725 - type: mrr_at_100 value: 77.89399999999999 - type: mrr_at_1000 value: 77.898 - type: mrr_at_3 value: 76.283 - type: mrr_at_5 value: 77.29700000000001 - type: ndcg_at_1 value: 66.44200000000001 - type: ndcg_at_10 value: 78.43 - type: ndcg_at_100 value: 79.462 - type: ndcg_at_1000 value: 79.754 - type: ndcg_at_3 value: 75.53800000000001 - type: ndcg_at_5 value: 77.332 - type: precision_at_1 value: 66.44200000000001 - type: precision_at_10 value: 9.878 - type: precision_at_100 value: 1.051 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 29.878 - type: precision_at_5 value: 18.953 - type: recall_at_1 value: 61.683 - type: recall_at_10 value: 90.259 - type: recall_at_100 value: 94.633 - type: recall_at_1000 value: 96.60499999999999 - type: recall_at_3 value: 82.502 - type: recall_at_5 value: 86.978 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 17.724 - type: map_at_10 value: 29.487999999999996 - type: map_at_100 value: 31.243 - type: map_at_1000 value: 31.419999999999998 - type: map_at_3 value: 25.612000000000002 - type: map_at_5 value: 27.859 - type: mrr_at_1 value: 35.802 - type: mrr_at_10 value: 44.684000000000005 - type: mrr_at_100 value: 45.578 - type: mrr_at_1000 value: 45.621 - type: mrr_at_3 value: 42.361 - type: mrr_at_5 value: 43.85 - type: ndcg_at_1 value: 35.802 - type: ndcg_at_10 value: 37.009 - type: ndcg_at_100 value: 43.903 - type: ndcg_at_1000 value: 47.019 - type: ndcg_at_3 value: 33.634 - type: ndcg_at_5 value: 34.965 - type: precision_at_1 value: 35.802 - type: precision_at_10 value: 10.386 - type: precision_at_100 value: 1.7309999999999999 - type: precision_at_1000 value: 0.231 - type: precision_at_3 value: 22.84 - type: precision_at_5 value: 17.037 - type: recall_at_1 value: 17.724 - type: recall_at_10 value: 43.708000000000006 - type: recall_at_100 value: 69.902 - type: recall_at_1000 value: 88.51 - type: recall_at_3 value: 30.740000000000002 - type: recall_at_5 value: 36.742000000000004 - task: type: Clustering dataset: name: MTEB FloresClusteringS2S type: jinaai/flores_clustering 
config: default split: test revision: 480b580487f53a46f881354a8348335d4edbb2de metrics: - type: v_measure value: 39.79120149869612 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 34.801 - type: map_at_10 value: 50.42100000000001 - type: map_at_100 value: 51.254 - type: map_at_1000 value: 51.327999999999996 - type: map_at_3 value: 47.56 - type: map_at_5 value: 49.379 - type: mrr_at_1 value: 69.602 - type: mrr_at_10 value: 76.385 - type: mrr_at_100 value: 76.668 - type: mrr_at_1000 value: 76.683 - type: mrr_at_3 value: 75.102 - type: mrr_at_5 value: 75.949 - type: ndcg_at_1 value: 69.602 - type: ndcg_at_10 value: 59.476 - type: ndcg_at_100 value: 62.527 - type: ndcg_at_1000 value: 64.043 - type: ndcg_at_3 value: 55.155 - type: ndcg_at_5 value: 57.623000000000005 - type: precision_at_1 value: 69.602 - type: precision_at_10 value: 12.292 - type: precision_at_100 value: 1.467 - type: precision_at_1000 value: 0.167 - type: precision_at_3 value: 34.634 - type: precision_at_5 value: 22.728 - type: recall_at_1 value: 34.801 - type: recall_at_10 value: 61.458 - type: recall_at_100 value: 73.363 - type: recall_at_1000 value: 83.43 - type: recall_at_3 value: 51.951 - type: recall_at_5 value: 56.82000000000001 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 67.46079999999999 - type: ap value: 61.81278199159353 - type: f1 value: 67.26505019954826 - task: type: Reranking dataset: name: MTEB MIRACL type: jinaai/miracl config: default split: test revision: d28a029f35c4ff7f616df47b0edf54e6882395e6 metrics: - type: map value: 73.90464144118539 - type: mrr value: 82.44674693216022 - task: type: Retrieval dataset: name: MTEB MIRACLRetrieval type: jinaai/miracl config: default split: test revision: None metrics: - type: map_at_1 value: 21.299 - type: map_at_10 value: 70.547 - type: map_at_100 value: 72.394 - type: map_at_1000 value: 72.39999999999999 - type: map_at_3 value: 41.317 - type: map_at_5 value: 53.756 - type: mrr_at_1 value: 72.84 - type: mrr_at_10 value: 82.466 - type: mrr_at_100 value: 82.52199999999999 - type: mrr_at_1000 value: 82.52199999999999 - type: mrr_at_3 value: 80.607 - type: mrr_at_5 value: 82.065 - type: ndcg_at_1 value: 72.994 - type: ndcg_at_10 value: 80.89 - type: ndcg_at_100 value: 83.30199999999999 - type: ndcg_at_1000 value: 83.337 - type: ndcg_at_3 value: 70.357 - type: ndcg_at_5 value: 72.529 - type: precision_at_1 value: 72.994 - type: precision_at_10 value: 43.056 - type: precision_at_100 value: 4.603 - type: precision_at_1000 value: 0.461 - type: precision_at_3 value: 61.626000000000005 - type: precision_at_5 value: 55.525000000000006 - type: recall_at_1 value: 21.299 - type: recall_at_10 value: 93.903 - type: recall_at_100 value: 99.86699999999999 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 46.653 - type: recall_at_5 value: 65.72200000000001 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.37163702690378 - type: f1 value: 90.18615216514222 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 
89.88992661774515 - type: f1 value: 89.3738963046966 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 71.97218422252622 - type: f1 value: 54.03096570916335 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 68.75917278185457 - type: f1 value: 49.144083814705844 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.75991930060525 - type: f1 value: 69.37993796176502 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.93006052454606 - type: f1 value: 66.04029135274683 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.81977135171486 - type: f1 value: 74.10477122507747 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.23402824478816 - type: f1 value: 71.75572665880296 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.189750849969215 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.78357393555938 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.605612998328358 - type: mrr value: 31.595529205695833 - task: type: Retrieval dataset: name: MTEB MintakaESRetrieval type: jinaai/mintakaqa config: default split: test revision: None metrics: - type: map_at_1 value: 16.213 - type: map_at_10 value: 24.079 - type: map_at_100 value: 25.039 - type: map_at_1000 value: 25.142999999999997 - type: map_at_3 value: 21.823 - type: map_at_5 value: 23.069 - type: mrr_at_1 value: 16.213 - type: mrr_at_10 value: 24.079 - type: mrr_at_100 value: 25.039 - type: mrr_at_1000 value: 25.142999999999997 - type: mrr_at_3 value: 21.823 - type: mrr_at_5 value: 23.069 - type: ndcg_at_1 value: 16.213 - type: ndcg_at_10 value: 28.315 - type: ndcg_at_100 value: 33.475 - type: ndcg_at_1000 value: 36.838 - type: ndcg_at_3 value: 23.627000000000002 - type: ndcg_at_5 value: 25.879 - type: precision_at_1 value: 16.213 - type: precision_at_10 value: 4.183 - type: precision_at_100 value: 0.6709999999999999 - type: precision_at_1000 value: 0.095 - type: precision_at_3 value: 9.612 - type: precision_at_5 value: 6.865 - type: recall_at_1 value: 16.213 - type: recall_at_10 value: 41.832 - type: recall_at_100 value: 67.12 - type: recall_at_1000 value: 94.843 - type: 
recall_at_3 value: 28.837000000000003 - type: recall_at_5 value: 34.323 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 4.692 - type: map_at_10 value: 10.783 - type: map_at_100 value: 13.447999999999999 - type: map_at_1000 value: 14.756 - type: map_at_3 value: 7.646 - type: map_at_5 value: 9.311 - type: mrr_at_1 value: 42.415000000000006 - type: mrr_at_10 value: 50.471 - type: mrr_at_100 value: 51.251999999999995 - type: mrr_at_1000 value: 51.292 - type: mrr_at_3 value: 48.4 - type: mrr_at_5 value: 49.809 - type: ndcg_at_1 value: 40.867 - type: ndcg_at_10 value: 30.303 - type: ndcg_at_100 value: 27.915 - type: ndcg_at_1000 value: 36.734 - type: ndcg_at_3 value: 35.74 - type: ndcg_at_5 value: 33.938 - type: precision_at_1 value: 42.415000000000006 - type: precision_at_10 value: 22.105 - type: precision_at_100 value: 7.173 - type: precision_at_1000 value: 2.007 - type: precision_at_3 value: 33.437 - type: precision_at_5 value: 29.349999999999998 - type: recall_at_1 value: 4.692 - type: recall_at_10 value: 14.798 - type: recall_at_100 value: 28.948 - type: recall_at_1000 value: 59.939 - type: recall_at_3 value: 8.562 - type: recall_at_5 value: 11.818 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 27.572999999999997 - type: map_at_10 value: 42.754 - type: map_at_100 value: 43.8 - type: map_at_1000 value: 43.838 - type: map_at_3 value: 38.157000000000004 - type: map_at_5 value: 40.9 - type: mrr_at_1 value: 31.373 - type: mrr_at_10 value: 45.321 - type: mrr_at_100 value: 46.109 - type: mrr_at_1000 value: 46.135 - type: mrr_at_3 value: 41.483 - type: mrr_at_5 value: 43.76 - type: ndcg_at_1 value: 31.373 - type: ndcg_at_10 value: 50.7 - type: ndcg_at_100 value: 55.103 - type: ndcg_at_1000 value: 55.955999999999996 - type: ndcg_at_3 value: 42.069 - type: ndcg_at_5 value: 46.595 - type: precision_at_1 value: 31.373 - type: precision_at_10 value: 8.601 - type: precision_at_100 value: 1.11 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 19.399 - type: precision_at_5 value: 14.224 - type: recall_at_1 value: 27.572999999999997 - type: recall_at_10 value: 72.465 - type: recall_at_100 value: 91.474 - type: recall_at_1000 value: 97.78099999999999 - type: recall_at_3 value: 50.087 - type: recall_at_5 value: 60.516000000000005 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.525 - type: map_at_10 value: 84.417 - type: map_at_100 value: 85.07000000000001 - type: map_at_1000 value: 85.085 - type: map_at_3 value: 81.45 - type: map_at_5 value: 83.317 - type: mrr_at_1 value: 81.17999999999999 - type: mrr_at_10 value: 87.34100000000001 - type: mrr_at_100 value: 87.461 - type: mrr_at_1000 value: 87.46199999999999 - type: mrr_at_3 value: 86.372 - type: mrr_at_5 value: 87.046 - type: ndcg_at_1 value: 81.17999999999999 - type: ndcg_at_10 value: 88.144 - type: ndcg_at_100 value: 89.424 - type: ndcg_at_1000 value: 89.517 - type: ndcg_at_3 value: 85.282 - type: ndcg_at_5 value: 86.874 - type: precision_at_1 value: 81.17999999999999 - type: precision_at_10 value: 13.385 - type: precision_at_100 value: 1.533 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.29 - type: precision_at_5 value: 24.546 - type: recall_at_1 value: 70.525 - type: recall_at_10 value: 95.22500000000001 - type: 
recall_at_100 value: 99.572 - type: recall_at_1000 value: 99.98899999999999 - type: recall_at_3 value: 87.035 - type: recall_at_5 value: 91.526 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 48.284384328108736 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 56.02508021518392 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.046 - type: map_at_100 value: 11.802999999999999 - type: map_at_1000 value: 12.074 - type: map_at_3 value: 7.071 - type: map_at_5 value: 8.556 - type: mrr_at_1 value: 19.8 - type: mrr_at_10 value: 30.105999999999998 - type: mrr_at_100 value: 31.16 - type: mrr_at_1000 value: 31.224 - type: mrr_at_3 value: 26.633000000000003 - type: mrr_at_5 value: 28.768 - type: ndcg_at_1 value: 19.8 - type: ndcg_at_10 value: 17.358 - type: ndcg_at_100 value: 24.566 - type: ndcg_at_1000 value: 29.653000000000002 - type: ndcg_at_3 value: 16.052 - type: ndcg_at_5 value: 14.325 - type: precision_at_1 value: 19.8 - type: precision_at_10 value: 9.07 - type: precision_at_100 value: 1.955 - type: precision_at_1000 value: 0.318 - type: precision_at_3 value: 14.933 - type: precision_at_5 value: 12.68 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.398 - type: recall_at_100 value: 39.683 - type: recall_at_1000 value: 64.625 - type: recall_at_3 value: 9.113 - type: recall_at_5 value: 12.873000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 87.90508618312852 - type: cos_sim_spearman value: 83.01323463129205 - type: euclidean_pearson value: 84.35845059002891 - type: euclidean_spearman value: 82.85508559018527 - type: manhattan_pearson value: 84.3682368950498 - type: manhattan_spearman value: 82.8619728517302 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 89.28294535873366 - type: cos_sim_spearman value: 81.61879268131732 - type: euclidean_pearson value: 85.99053604863724 - type: euclidean_spearman value: 80.95176684739084 - type: manhattan_pearson value: 85.98054086663903 - type: manhattan_spearman value: 80.9911070430335 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 86.15898098455258 - type: cos_sim_spearman value: 86.8247985072307 - type: euclidean_pearson value: 86.25342429918649 - type: euclidean_spearman value: 87.13468603023252 - type: manhattan_pearson value: 86.2006134067688 - type: manhattan_spearman value: 87.06135811996896 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.57403998481877 - type: cos_sim_spearman value: 83.55947075172618 - type: euclidean_pearson value: 84.97097562965358 - type: euclidean_spearman value: 83.6287075601467 - type: manhattan_pearson 
value: 84.87092197104133 - type: manhattan_spearman value: 83.53783891641335 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.14632780204231 - type: cos_sim_spearman value: 88.74903634923868 - type: euclidean_pearson value: 88.03922995855112 - type: euclidean_spearman value: 88.72852190525855 - type: manhattan_pearson value: 87.9694791024271 - type: manhattan_spearman value: 88.66461452107418 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.75989818558652 - type: cos_sim_spearman value: 86.03107893122942 - type: euclidean_pearson value: 85.21908960133018 - type: euclidean_spearman value: 85.93012720153482 - type: manhattan_pearson value: 85.1969170195502 - type: manhattan_spearman value: 85.8975254197784 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.16803898789955 - type: cos_sim_spearman value: 88.56139047950525 - type: euclidean_pearson value: 88.09685325747859 - type: euclidean_spearman value: 88.0457609458947 - type: manhattan_pearson value: 88.07054413001431 - type: manhattan_spearman value: 88.10784098889314 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.7160384474547 - type: cos_sim_spearman value: 86.4899235500562 - type: euclidean_pearson value: 85.90854477703468 - type: euclidean_spearman value: 86.16085009124498 - type: manhattan_pearson value: 85.9249735317884 - type: manhattan_spearman value: 86.25038421339116 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.37914622360788 - type: cos_sim_spearman value: 88.24619159322809 - type: euclidean_pearson value: 89.00538382632769 - type: euclidean_spearman value: 88.44675863524736 - type: manhattan_pearson value: 88.97372120683606 - type: manhattan_spearman value: 88.33509324222129 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 66.22181360203069 - type: cos_sim_spearman value: 65.6218291833768 - type: euclidean_pearson value: 67.14543788822508 - type: euclidean_spearman value: 65.21269939987857 - type: manhattan_pearson value: 67.03304607195636 - type: manhattan_spearman value: 65.18885316423805 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 65.71694059677084 - type: cos_sim_spearman value: 67.96591844540954 - type: euclidean_pearson value: 65.6964079162296 - type: euclidean_spearman value: 67.53027948900173 - type: manhattan_pearson value: 65.93545097673741 - type: manhattan_spearman value: 67.7261811805062 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson 
value: 75.43544796375058 - type: cos_sim_spearman value: 78.80462701160789 - type: euclidean_pearson value: 76.19135575163138 - type: euclidean_spearman value: 78.4974732597096 - type: manhattan_pearson value: 76.3254742699264 - type: manhattan_spearman value: 78.51884307690416 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.46805293607684 - type: cos_sim_spearman value: 87.83792784689113 - type: euclidean_pearson value: 87.3872143683234 - type: euclidean_spearman value: 87.61611384542778 - type: manhattan_pearson value: 87.38542672601992 - type: manhattan_spearman value: 87.61423971087297 - task: type: STS dataset: name: MTEB STSES type: PlanTL-GOB-ES/sts-es config: default split: test revision: 0912bb6c9393c76d62a7c5ee81c4c817ff47c9f4 metrics: - type: cos_sim_pearson value: 82.55286866116202 - type: cos_sim_spearman value: 80.22150503320272 - type: euclidean_pearson value: 83.27223445187087 - type: euclidean_spearman value: 80.59078590992925 - type: manhattan_pearson value: 83.23095887013197 - type: manhattan_spearman value: 80.87994285189795 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.29717302265792 - type: mrr value: 94.02156304117088 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 49.9 - type: map_at_10 value: 58.626 - type: map_at_100 value: 59.519999999999996 - type: map_at_1000 value: 59.55200000000001 - type: map_at_3 value: 56.232000000000006 - type: map_at_5 value: 57.833 - type: mrr_at_1 value: 52.333 - type: mrr_at_10 value: 60.039 - type: mrr_at_100 value: 60.732 - type: mrr_at_1000 value: 60.75899999999999 - type: mrr_at_3 value: 58.278 - type: mrr_at_5 value: 59.428000000000004 - type: ndcg_at_1 value: 52.333 - type: ndcg_at_10 value: 62.67 - type: ndcg_at_100 value: 66.465 - type: ndcg_at_1000 value: 67.425 - type: ndcg_at_3 value: 58.711999999999996 - type: ndcg_at_5 value: 60.958999999999996 - type: precision_at_1 value: 52.333 - type: precision_at_10 value: 8.333 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 22.778000000000002 - type: precision_at_5 value: 15.267 - type: recall_at_1 value: 49.9 - type: recall_at_10 value: 73.394 - type: recall_at_100 value: 90.43299999999999 - type: recall_at_1000 value: 98.167 - type: recall_at_3 value: 63.032999999999994 - type: recall_at_5 value: 68.444 - task: type: Clustering dataset: name: MTEB SpanishNewsClusteringP2P type: jinaai/spanish_news_clustering config: default split: test revision: b5edc3d3d7c12c7b9f883e9da50f6732f3624142 metrics: - type: v_measure value: 48.30543557796266 - task: type: Retrieval dataset: name: MTEB SpanishPassageRetrievalS2P type: jinaai/spanish_passage_retrieval config: default split: test revision: None metrics: - type: map_at_1 value: 14.443 - type: map_at_10 value: 28.736 - type: map_at_100 value: 34.514 - type: map_at_1000 value: 35.004000000000005 - type: map_at_3 value: 20.308 - type: map_at_5 value: 25.404 - type: mrr_at_1 value: 50.29900000000001 - type: mrr_at_10 value: 63.757 - type: mrr_at_100 value: 64.238 - type: mrr_at_1000 value: 64.24600000000001 - type: mrr_at_3 value: 59.480999999999995 - type: mrr_at_5 value: 62.924 - 
type: ndcg_at_1 value: 50.29900000000001 - type: ndcg_at_10 value: 42.126999999999995 - type: ndcg_at_100 value: 57.208000000000006 - type: ndcg_at_1000 value: 60.646 - type: ndcg_at_3 value: 38.722 - type: ndcg_at_5 value: 40.007999999999996 - type: precision_at_1 value: 50.29900000000001 - type: precision_at_10 value: 19.82 - type: precision_at_100 value: 4.82 - type: precision_at_1000 value: 0.5910000000000001 - type: precision_at_3 value: 31.537 - type: precision_at_5 value: 28.262999999999998 - type: recall_at_1 value: 14.443 - type: recall_at_10 value: 43.885999999999996 - type: recall_at_100 value: 85.231 - type: recall_at_1000 value: 99.07000000000001 - type: recall_at_3 value: 22.486 - type: recall_at_5 value: 33.035 - type: map_at_1 value: 15.578 - type: map_at_10 value: 52.214000000000006 - type: map_at_100 value: 64.791 - type: map_at_1000 value: 64.791 - type: map_at_3 value: 33.396 - type: map_at_5 value: 41.728 - type: mrr_at_1 value: 73.653 - type: mrr_at_10 value: 85.116 - type: mrr_at_100 value: 85.205 - type: mrr_at_1000 value: 85.205 - type: mrr_at_3 value: 84.631 - type: mrr_at_5 value: 85.05 - type: ndcg_at_1 value: 76.64699999999999 - type: ndcg_at_10 value: 70.38600000000001 - type: ndcg_at_100 value: 82.27600000000001 - type: ndcg_at_1000 value: 82.27600000000001 - type: ndcg_at_3 value: 70.422 - type: ndcg_at_5 value: 69.545 - type: precision_at_1 value: 76.64699999999999 - type: precision_at_10 value: 43.653 - type: precision_at_100 value: 7.718999999999999 - type: precision_at_1000 value: 0.772 - type: precision_at_3 value: 64.671 - type: precision_at_5 value: 56.766000000000005 - type: recall_at_1 value: 15.578 - type: recall_at_10 value: 67.459 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 36.922 - type: recall_at_5 value: 49.424 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81683168316832 - type: cos_sim_ap value: 95.61502659412484 - type: cos_sim_f1 value: 90.6813627254509 - type: cos_sim_precision value: 90.86345381526104 - type: cos_sim_recall value: 90.5 - type: dot_accuracy value: 99.8039603960396 - type: dot_ap value: 95.36783483182609 - type: dot_f1 value: 89.90825688073394 - type: dot_precision value: 91.68399168399168 - type: dot_recall value: 88.2 - type: euclidean_accuracy value: 99.81188118811882 - type: euclidean_ap value: 95.51583052324564 - type: euclidean_f1 value: 90.46214355948868 - type: euclidean_precision value: 88.97485493230174 - type: euclidean_recall value: 92.0 - type: manhattan_accuracy value: 99.8079207920792 - type: manhattan_ap value: 95.44030644653718 - type: manhattan_f1 value: 90.37698412698413 - type: manhattan_precision value: 89.66535433070865 - type: manhattan_recall value: 91.10000000000001 - type: max_accuracy value: 99.81683168316832 - type: max_ap value: 95.61502659412484 - type: max_f1 value: 90.6813627254509 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 55.39046705023096 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 
33.57429225651293 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.17622570658746 - type: mrr value: 50.99844293778118 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.97416289382191 - type: cos_sim_spearman value: 29.871890597161432 - type: dot_pearson value: 28.768845892613644 - type: dot_spearman value: 28.872458999448686 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22599999999999998 - type: map_at_10 value: 1.646 - type: map_at_100 value: 9.491 - type: map_at_1000 value: 23.75 - type: map_at_3 value: 0.588 - type: map_at_5 value: 0.9129999999999999 - type: mrr_at_1 value: 84.0 - type: mrr_at_10 value: 89.889 - type: mrr_at_100 value: 89.889 - type: mrr_at_1000 value: 89.889 - type: mrr_at_3 value: 89.667 - type: mrr_at_5 value: 89.667 - type: ndcg_at_1 value: 75.0 - type: ndcg_at_10 value: 67.368 - type: ndcg_at_100 value: 52.834 - type: ndcg_at_1000 value: 49.144 - type: ndcg_at_3 value: 72.866 - type: ndcg_at_5 value: 70.16 - type: precision_at_1 value: 84.0 - type: precision_at_10 value: 71.8 - type: precision_at_100 value: 54.04 - type: precision_at_1000 value: 21.709999999999997 - type: precision_at_3 value: 77.333 - type: precision_at_5 value: 74.0 - type: recall_at_1 value: 0.22599999999999998 - type: recall_at_10 value: 1.9029999999999998 - type: recall_at_100 value: 13.012 - type: recall_at_1000 value: 46.105000000000004 - type: recall_at_3 value: 0.63 - type: recall_at_5 value: 1.0030000000000001 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.5 - type: map_at_10 value: 8.193999999999999 - type: map_at_100 value: 14.01 - type: map_at_1000 value: 15.570999999999998 - type: map_at_3 value: 4.361000000000001 - type: map_at_5 value: 5.9270000000000005 - type: mrr_at_1 value: 16.326999999999998 - type: mrr_at_10 value: 33.326 - type: mrr_at_100 value: 34.592 - type: mrr_at_1000 value: 34.592 - type: mrr_at_3 value: 29.252 - type: mrr_at_5 value: 30.680000000000003 - type: ndcg_at_1 value: 15.306000000000001 - type: ndcg_at_10 value: 19.819 - type: ndcg_at_100 value: 33.428000000000004 - type: ndcg_at_1000 value: 45.024 - type: ndcg_at_3 value: 19.667 - type: ndcg_at_5 value: 19.625 - type: precision_at_1 value: 16.326999999999998 - type: precision_at_10 value: 18.367 - type: precision_at_100 value: 7.367 - type: precision_at_1000 value: 1.496 - type: precision_at_3 value: 23.128999999999998 - type: precision_at_5 value: 21.633 - type: recall_at_1 value: 1.5 - type: recall_at_10 value: 14.362 - type: recall_at_100 value: 45.842 - type: recall_at_1000 value: 80.42 - type: recall_at_3 value: 5.99 - type: recall_at_5 value: 8.701 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.04740000000001 - type: ap value: 13.58661943759992 - type: f1 value: 53.727487131754195 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: 
mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.06395019807584 - type: f1 value: 61.36753664680866 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 40.19881263066229 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.19401561661799 - type: cos_sim_ap value: 71.62462506173092 - type: cos_sim_f1 value: 66.0641327225455 - type: cos_sim_precision value: 62.234662934453 - type: cos_sim_recall value: 70.3957783641161 - type: dot_accuracy value: 84.69333015437802 - type: dot_ap value: 69.83805526490895 - type: dot_f1 value: 64.85446235265817 - type: dot_precision value: 59.59328028293546 - type: dot_recall value: 71.13456464379946 - type: euclidean_accuracy value: 85.38475293556655 - type: euclidean_ap value: 72.05594596250286 - type: euclidean_f1 value: 66.53543307086615 - type: euclidean_precision value: 62.332872291378514 - type: euclidean_recall value: 71.34564643799473 - type: manhattan_accuracy value: 85.3907134767837 - type: manhattan_ap value: 72.04585410650152 - type: manhattan_f1 value: 66.57132642116554 - type: manhattan_precision value: 60.704194740273856 - type: manhattan_recall value: 73.6939313984169 - type: max_accuracy value: 85.3907134767837 - type: max_ap value: 72.05594596250286 - type: max_f1 value: 66.57132642116554 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.30414871735165 - type: cos_sim_ap value: 86.4398673359918 - type: cos_sim_f1 value: 78.9243598692186 - type: cos_sim_precision value: 75.47249350101876 - type: cos_sim_recall value: 82.7071142593163 - type: dot_accuracy value: 89.26145845461248 - type: dot_ap value: 86.32172118414802 - type: dot_f1 value: 78.8277467755645 - type: dot_precision value: 75.79418662497335 - type: dot_recall value: 82.11425931629196 - type: euclidean_accuracy value: 89.24205378973105 - type: euclidean_ap value: 86.23988673522649 - type: euclidean_f1 value: 78.67984857951413 - type: euclidean_precision value: 75.2689684269742 - type: euclidean_recall value: 82.41453649522637 - type: manhattan_accuracy value: 89.18189932859859 - type: manhattan_ap value: 86.21003833972824 - type: manhattan_f1 value: 78.70972564850115 - type: manhattan_precision value: 76.485544094145 - type: manhattan_recall value: 81.0671388974438 - type: max_accuracy value: 89.30414871735165 - type: max_ap value: 86.4398673359918 - type: max_f1 value: 78.9243598692186 - task: type: Clustering dataset: name: MTEB WikiCitiesClustering type: jinaai/cities_wiki_clustering config: default split: test revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa metrics: - type: v_measure value: 73.254610626148 - task: type: Retrieval dataset: name: MTEB XMarketES type: jinaai/xmarket_ml config: default split: test revision: 705db869e8107dfe6e34b832af90446e77d813e3 metrics: - type: map_at_1 value: 5.506 - type: map_at_10 value: 11.546 - type: map_at_100 value: 14.299999999999999 - type: map_at_1000 value: 15.146999999999998 - type: map_at_3 
value: 8.748000000000001 - type: map_at_5 value: 10.036000000000001 - type: mrr_at_1 value: 17.902 - type: mrr_at_10 value: 25.698999999999998 - type: mrr_at_100 value: 26.634 - type: mrr_at_1000 value: 26.704 - type: mrr_at_3 value: 23.244999999999997 - type: mrr_at_5 value: 24.555 - type: ndcg_at_1 value: 17.902 - type: ndcg_at_10 value: 19.714000000000002 - type: ndcg_at_100 value: 25.363000000000003 - type: ndcg_at_1000 value: 30.903999999999996 - type: ndcg_at_3 value: 17.884 - type: ndcg_at_5 value: 18.462 - type: precision_at_1 value: 17.902 - type: precision_at_10 value: 10.467 - type: precision_at_100 value: 3.9699999999999998 - type: precision_at_1000 value: 1.1320000000000001 - type: precision_at_3 value: 14.387 - type: precision_at_5 value: 12.727 - type: recall_at_1 value: 5.506 - type: recall_at_10 value: 19.997999999999998 - type: recall_at_100 value: 42.947 - type: recall_at_1000 value: 67.333 - type: recall_at_3 value: 11.158 - type: recall_at_5 value: 14.577000000000002 - task: type: Retrieval dataset: name: MTEB XPQAESRetrieval type: jinaai/xpqa config: default split: test revision: None metrics: - type: map_at_1 value: 32.53 - type: map_at_10 value: 58.68600000000001 - type: map_at_100 value: 60.45399999999999 - type: map_at_1000 value: 60.51499999999999 - type: map_at_3 value: 50.356 - type: map_at_5 value: 55.98 - type: mrr_at_1 value: 61.791 - type: mrr_at_10 value: 68.952 - type: mrr_at_100 value: 69.524 - type: mrr_at_1000 value: 69.538 - type: mrr_at_3 value: 67.087 - type: mrr_at_5 value: 68.052 - type: ndcg_at_1 value: 61.791 - type: ndcg_at_10 value: 65.359 - type: ndcg_at_100 value: 70.95700000000001 - type: ndcg_at_1000 value: 71.881 - type: ndcg_at_3 value: 59.999 - type: ndcg_at_5 value: 61.316 - type: precision_at_1 value: 61.791 - type: precision_at_10 value: 18.184 - type: precision_at_100 value: 2.317 - type: precision_at_1000 value: 0.245 - type: precision_at_3 value: 42.203 - type: precision_at_5 value: 31.374999999999996 - type: recall_at_1 value: 32.53 - type: recall_at_10 value: 73.098 - type: recall_at_100 value: 94.029 - type: recall_at_1000 value: 99.842 - type: recall_at_3 value: 54.525 - type: recall_at_5 value: 63.796 --- <!-- TODO: add evaluation results here --> <br><br> <p align="center"> <img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px"> </p> <p align="center"> <b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b> </p> ## Quick Start The easiest way to starting using `jina-embeddings-v2-base-es` is to use Jina AI's [Embedding API](https://jina.ai/embeddings/). ## Intended Usage & Model Info `jina-embeddings-v2-base-es` is a Spanish/English bilingual text **embedding model** supporting **8192 sequence length**. It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence length. We have designed it for high performance in mono-lingual & cross-lingual applications and trained it specifically to support mixed Spanish-English input without bias. Additionally, we provide the following embedding models: `jina-embeddings-v2-base-es` es un modelo (embedding) de texto bilingüe Inglés/Español que admite una longitud de secuencia de 8192. 
Se basa en la arquitectura BERT (JinaBERT) que incorpora la variante bi-direccional simétrica de [ALiBi](https://arxiv.org/abs/2108.12409) para permitir una mayor longitud de secuencia.
Hemos diseñado este modelo para un alto rendimiento en aplicaciones monolingües y bilingües, y está entrenado específicamente para admitir entradas mixtas de español e inglés sin sesgo.

Adicionalmente, proporcionamos los siguientes modelos (embeddings):

- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): Chinese-English bilingual embeddings.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): German-English bilingual embeddings.
- [`jina-embeddings-v2-base-es`](): Spanish-English bilingual embeddings **(you are here)**.

## Data & Parameters

The data and training details are described in this [technical report](https://arxiv.org/abs/2402.17016).

## Usage

**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>

### Why mean pooling?

`mean pooling` takes all token embeddings from the model output and averages them at the sentence/paragraph level.
It has proven to be the most effective way to produce high-quality sentence embeddings.
We offer an `encode` function to deal with this.

However, if you would like to do it without using the default `encode` function:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, ignoring padding tokens via the attention mask.
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['How is the weather today?', 'What is the current weather like today?']

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-base-es')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-es', trust_remote_code=True)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
```

</p>
</details>

You can use Jina Embedding models directly from the `transformers` package:

```python
!pip install transformers
from transformers import AutoModel
from numpy.linalg import norm

cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-es', trust_remote_code=True)  # trust_remote_code is needed to use the encode method
embeddings = model.encode(['How is the weather today?', '¿Qué tiempo hace hoy?'])
print(cos_sim(embeddings[0], embeddings[1]))
```

If you only want to handle shorter sequences, such as 2k, pass the `max_length` parameter to the `encode` function:

```python
embeddings = model.encode(
    ['Very long ... document'],
    max_length=2048
)
```

Or you can use the model with the `sentence-transformers` package:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jinaai/jina-embeddings-v2-base-es", trust_remote_code=True)
embeddings = model.encode(['How is the weather today?', '¿Qué tiempo hace hoy?'])
print(util.cos_sim(embeddings[0], embeddings[1]))
```

And if you only want to handle shorter sequences, such as 2k, you can set `model.max_seq_length`:

```python
model.max_seq_length = 2048
```
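Because the model is trained on mixed Spanish-English input, a query in one language can retrieve passages written in the other. The snippet below is a small illustrative sketch of cross-lingual semantic search that reuses the `encode` method shown above; the corpus and query are made-up examples and not part of the official documentation:

```python
import numpy as np
from transformers import AutoModel

# Load the bilingual embedding model (trust_remote_code exposes the encode method).
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-es', trust_remote_code=True)

# A tiny mixed Spanish/English corpus (hypothetical example documents).
documents = [
    'El clima en Madrid es soleado y cálido en verano.',
    'The weather in London is often rainy and cold.',
    'La paella es un plato tradicional de Valencia.',
    'Python is a popular programming language for data science.',
]
query = 'What is the weather like in Madrid?'

# Embed the corpus and the query with the same model.
doc_embeddings = np.asarray(model.encode(documents))
query_embedding = np.asarray(model.encode([query]))[0]

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank documents by similarity to the query, best match first.
scores = [cosine(query_embedding, d) for d in doc_embeddings]
for idx in np.argsort(scores)[::-1]:
    print(f'{scores[idx]:.3f}  {documents[idx]}')
```

For larger corpora you would typically precompute and index the document embeddings rather than encoding them on every query.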
## Alternatives to Transformers and Sentence Transformers

1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/).
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploying them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).

## Use Jina Embeddings for RAG

According to the latest blog post from [LlamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),

> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.

<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">

A minimal sketch of this retrieve-and-rerank setup is shown at the end of this card.

## Plans

1. Bilingual embedding models supporting more European & Asian languages, including French, Italian and Japanese.
2. Multimodal embedding models to enable multimodal RAG applications.
3. High-performance rerankers.

## Contact

Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.

## Citation

If you find Jina Embeddings useful in your research, please cite the following paper:

```
@article{mohr2024multi,
  title={Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings},
  author={Mohr, Isabelle and Krimmel, Markus and Sturua, Saba and Akram, Mohammad Kalim and Koukounas, Andreas and G{\"u}nther, Michael and Mastrapas, Georgios and Ravishankar, Vinit and Mart{\'\i}nez, Joan Fontanals and Wang, Feng and others},
  journal={arXiv preprint arXiv:2402.17016},
  year={2024}
}
```
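As referenced in the RAG section above, pairing a bi-encoder for first-stage retrieval with a cross-encoder reranker tends to improve hit rate and MRR. The sketch below is an illustrative outline only, not an official Jina or LlamaIndex recipe: it assumes `BAAI/bge-reranker-large` can be loaded through `sentence_transformers.CrossEncoder`, and the corpus and query are made-up examples.

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# First stage: dense retrieval with the bilingual embedding model.
retriever = SentenceTransformer('jinaai/jina-embeddings-v2-base-es', trust_remote_code=True)
# Second stage: a cross-encoder reranker (one of the rerankers named in the blog post).
reranker = CrossEncoder('BAAI/bge-reranker-large')

corpus = [
    'Jina AI ofrece modelos de embeddings bilingües español-inglés.',
    'The reranker rescores query-document pairs with a cross-encoder.',
    'Mean pooling averages token embeddings into a single vector.',
    'Madrid es la capital de España.',
]
query = 'How does a reranker improve retrieval quality?'

# Retrieve the top-k candidates by cosine similarity.
corpus_emb = retriever.encode(corpus, convert_to_tensor=True)
query_emb = retriever.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=3)[0]

# Rerank the retrieved candidates with the cross-encoder, best match first.
pairs = [(query, corpus[hit['corpus_id']]) for hit in hits]
rerank_scores = reranker.predict(pairs)
for score, (_, passage) in sorted(zip(rerank_scores, pairs), key=lambda x: x[0], reverse=True):
    print(f'{float(score):.3f}  {passage}')
```

In practice the reranker is applied only to the top candidates from the first stage, so its extra cost stays bounded.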
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
{"language": ["es", "en"], "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "inference": false, "model-index": [{"name": "jina-embeddings-v2-base-es", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 74.25373134328358}, {"type": "ap", "value": 37.05201236793268}, {"type": "f1", "value": 68.16770391201077}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 78.30885}, {"type": "ap", "value": 73.01622441156408}, {"type": "f1", "value": 78.20769284466313}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 38.324}, {"type": "f1", "value": 37.89543008761673}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (es)", "type": "mteb/amazon_reviews_multi", "config": "es", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 38.678000000000004}, {"type": "f1", "value": 38.122639506976}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 23.968999999999998}, {"type": "map_at_10", "value": 40.691}, {"type": "map_at_100", "value": 41.713}, {"type": "map_at_1000", "value": 41.719}, {"type": "map_at_3", "value": 35.42}, {"type": "map_at_5", "value": 38.442}, {"type": "mrr_at_1", "value": 24.395}, {"type": "mrr_at_10", "value": 40.853}, {"type": "mrr_at_100", "value": 41.869}, {"type": "mrr_at_1000", "value": 41.874}, {"type": "mrr_at_3", "value": 35.68}, {"type": "mrr_at_5", "value": 38.572}, {"type": "ndcg_at_1", "value": 23.968999999999998}, {"type": "ndcg_at_10", "value": 50.129999999999995}, {"type": "ndcg_at_100", "value": 54.364000000000004}, {"type": "ndcg_at_1000", "value": 54.494}, {"type": "ndcg_at_3", "value": 39.231}, {"type": "ndcg_at_5", "value": 44.694}, {"type": "precision_at_1", "value": 23.968999999999998}, {"type": "precision_at_10", "value": 8.036999999999999}, {"type": "precision_at_100", "value": 0.9860000000000001}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 16.761}, {"type": "precision_at_5", "value": 12.717}, {"type": "recall_at_1", "value": 23.968999999999998}, {"type": "recall_at_10", "value": 80.36999999999999}, {"type": "recall_at_100", "value": 98.578}, {"type": "recall_at_1000", "value": 99.57300000000001}, {"type": "recall_at_3", "value": 50.28399999999999}, {"type": "recall_at_5", "value": 63.585}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 41.54886683150053}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": 
"mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 32.186028697637234}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 61.19432643698725}, {"type": "mrr", "value": 75.28646176845622}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.3828259381228}, {"type": "cos_sim_spearman", "value": 83.04647058342209}, {"type": "euclidean_pearson", "value": 84.02895346096244}, {"type": "euclidean_spearman", "value": 82.34524978635342}, {"type": "manhattan_pearson", "value": 84.35030723233426}, {"type": "manhattan_spearman", "value": 83.17177464337936}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 85.25649350649351}, {"type": "f1", "value": 85.22320474023192}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BigPatentClustering", "type": "jinaai/big-patent-clustering", "config": "default", "split": "test", "revision": "62d5330920bca426ce9d3c76ea914f15fc83e891"}, "metrics": [{"type": "v_measure", "value": 20.42929408254094}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 35.165318177498136}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 28.89030154229562}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 30.119}, {"type": "map_at_10", "value": 42.092}, {"type": "map_at_100", "value": 43.506}, {"type": "map_at_1000", "value": 43.631}, {"type": "map_at_3", "value": 38.373000000000005}, {"type": "map_at_5", "value": 40.501}, {"type": "mrr_at_1", "value": 38.196999999999996}, {"type": "mrr_at_10", "value": 48.237}, {"type": "mrr_at_100", "value": 48.914}, {"type": "mrr_at_1000", "value": 48.959}, {"type": "mrr_at_3", "value": 45.279}, {"type": "mrr_at_5", "value": 47.11}, {"type": "ndcg_at_1", "value": 38.196999999999996}, {"type": "ndcg_at_10", "value": 48.849}, {"type": "ndcg_at_100", "value": 53.713}, {"type": "ndcg_at_1000", "value": 55.678000000000004}, {"type": "ndcg_at_3", "value": 43.546}, {"type": "ndcg_at_5", "value": 46.009}, {"type": "precision_at_1", "value": 38.196999999999996}, {"type": "precision_at_10", "value": 9.642000000000001}, {"type": "precision_at_100", "value": 1.5190000000000001}, {"type": "precision_at_1000", "value": 0.199}, {"type": "precision_at_3", "value": 21.65}, {"type": "precision_at_5", "value": 15.708}, {"type": "recall_at_1", "value": 30.119}, {"type": 
"recall_at_10", "value": 61.788}, {"type": "recall_at_100", "value": 82.14399999999999}, {"type": "recall_at_1000", "value": 95.003}, {"type": "recall_at_3", "value": 45.772}, {"type": "recall_at_5", "value": 53.04600000000001}, {"type": "map_at_1", "value": 28.979}, {"type": "map_at_10", "value": 37.785000000000004}, {"type": "map_at_100", "value": 38.945}, {"type": "map_at_1000", "value": 39.071}, {"type": "map_at_3", "value": 35.083999999999996}, {"type": "map_at_5", "value": 36.571999999999996}, {"type": "mrr_at_1", "value": 36.242000000000004}, {"type": "mrr_at_10", "value": 43.552}, {"type": "mrr_at_100", "value": 44.228}, {"type": "mrr_at_1000", "value": 44.275999999999996}, {"type": "mrr_at_3", "value": 41.359}, {"type": "mrr_at_5", "value": 42.598}, {"type": "ndcg_at_1", "value": 36.242000000000004}, {"type": "ndcg_at_10", "value": 42.94}, {"type": "ndcg_at_100", "value": 47.343}, {"type": "ndcg_at_1000", "value": 49.538}, {"type": "ndcg_at_3", "value": 39.086999999999996}, {"type": "ndcg_at_5", "value": 40.781}, {"type": "precision_at_1", "value": 36.242000000000004}, {"type": "precision_at_10", "value": 7.954999999999999}, {"type": "precision_at_100", "value": 1.303}, {"type": "precision_at_1000", "value": 0.178}, {"type": "precision_at_3", "value": 18.556}, {"type": "precision_at_5", "value": 13.145999999999999}, {"type": "recall_at_1", "value": 28.979}, {"type": "recall_at_10", "value": 51.835}, {"type": "recall_at_100", "value": 70.47}, {"type": "recall_at_1000", "value": 84.68299999999999}, {"type": "recall_at_3", "value": 40.410000000000004}, {"type": "recall_at_5", "value": 45.189}, {"type": "map_at_1", "value": 37.878}, {"type": "map_at_10", "value": 49.903}, {"type": "map_at_100", "value": 50.797000000000004}, {"type": "map_at_1000", "value": 50.858000000000004}, {"type": "map_at_3", "value": 46.526}, {"type": "map_at_5", "value": 48.615}, {"type": "mrr_at_1", "value": 43.135}, {"type": "mrr_at_10", "value": 53.067}, {"type": "mrr_at_100", "value": 53.668000000000006}, {"type": "mrr_at_1000", "value": 53.698}, {"type": "mrr_at_3", "value": 50.449}, {"type": "mrr_at_5", "value": 52.117000000000004}, {"type": "ndcg_at_1", "value": 43.135}, {"type": "ndcg_at_10", "value": 55.641}, {"type": "ndcg_at_100", "value": 59.427}, {"type": "ndcg_at_1000", "value": 60.655}, {"type": "ndcg_at_3", "value": 49.969}, {"type": "ndcg_at_5", "value": 53.075}, {"type": "precision_at_1", "value": 43.135}, {"type": "precision_at_10", "value": 8.997}, {"type": "precision_at_100", "value": 1.1809999999999998}, {"type": "precision_at_1000", "value": 0.133}, {"type": "precision_at_3", "value": 22.215}, {"type": "precision_at_5", "value": 15.586}, {"type": "recall_at_1", "value": 37.878}, {"type": "recall_at_10", "value": 69.405}, {"type": "recall_at_100", "value": 86.262}, {"type": "recall_at_1000", "value": 95.012}, {"type": "recall_at_3", "value": 54.458}, {"type": "recall_at_5", "value": 61.965}, {"type": "map_at_1", "value": 24.853}, {"type": "map_at_10", "value": 32.402}, {"type": "map_at_100", "value": 33.417}, {"type": "map_at_1000", "value": 33.498}, {"type": "map_at_3", "value": 30.024}, {"type": "map_at_5", "value": 31.407}, {"type": "mrr_at_1", "value": 26.667}, {"type": "mrr_at_10", "value": 34.399}, {"type": "mrr_at_100", "value": 35.284}, {"type": "mrr_at_1000", "value": 35.345}, {"type": "mrr_at_3", "value": 32.109}, {"type": "mrr_at_5", "value": 33.375}, {"type": "ndcg_at_1", "value": 26.667}, {"type": "ndcg_at_10", "value": 36.854}, {"type": "ndcg_at_100", "value": 42.196}, 
{"type": "ndcg_at_1000", "value": 44.303}, {"type": "ndcg_at_3", "value": 32.186}, {"type": "ndcg_at_5", "value": 34.512}, {"type": "precision_at_1", "value": 26.667}, {"type": "precision_at_10", "value": 5.559}, {"type": "precision_at_100", "value": 0.88}, {"type": "precision_at_1000", "value": 0.109}, {"type": "precision_at_3", "value": 13.333}, {"type": "precision_at_5", "value": 9.379}, {"type": "recall_at_1", "value": 24.853}, {"type": "recall_at_10", "value": 48.636}, {"type": "recall_at_100", "value": 73.926}, {"type": "recall_at_1000", "value": 89.94}, {"type": "recall_at_3", "value": 36.266}, {"type": "recall_at_5", "value": 41.723}, {"type": "map_at_1", "value": 14.963999999999999}, {"type": "map_at_10", "value": 22.591}, {"type": "map_at_100", "value": 23.735999999999997}, {"type": "map_at_1000", "value": 23.868000000000002}, {"type": "map_at_3", "value": 20.093}, {"type": "map_at_5", "value": 21.499}, {"type": "mrr_at_1", "value": 18.407999999999998}, {"type": "mrr_at_10", "value": 26.863}, {"type": "mrr_at_100", "value": 27.87}, {"type": "mrr_at_1000", "value": 27.947}, {"type": "mrr_at_3", "value": 24.254}, {"type": "mrr_at_5", "value": 25.784000000000002}, {"type": "ndcg_at_1", "value": 18.407999999999998}, {"type": "ndcg_at_10", "value": 27.549}, {"type": "ndcg_at_100", "value": 33.188}, {"type": "ndcg_at_1000", "value": 36.312}, {"type": "ndcg_at_3", "value": 22.862}, {"type": "ndcg_at_5", "value": 25.130999999999997}, {"type": "precision_at_1", "value": 18.407999999999998}, {"type": "precision_at_10", "value": 5.087}, {"type": "precision_at_100", "value": 0.923}, {"type": "precision_at_1000", "value": 0.133}, {"type": "precision_at_3", "value": 10.987}, {"type": "precision_at_5", "value": 8.209}, {"type": "recall_at_1", "value": 14.963999999999999}, {"type": "recall_at_10", "value": 38.673}, {"type": "recall_at_100", "value": 63.224999999999994}, {"type": "recall_at_1000", "value": 85.443}, {"type": "recall_at_3", "value": 25.840000000000003}, {"type": "recall_at_5", "value": 31.503999999999998}, {"type": "map_at_1", "value": 27.861000000000004}, {"type": "map_at_10", "value": 37.562}, {"type": "map_at_100", "value": 38.906}, {"type": "map_at_1000", "value": 39.021}, {"type": "map_at_3", "value": 34.743}, {"type": "map_at_5", "value": 36.168}, {"type": "mrr_at_1", "value": 34.455999999999996}, {"type": "mrr_at_10", "value": 43.428}, {"type": "mrr_at_100", "value": 44.228}, {"type": "mrr_at_1000", "value": 44.278}, {"type": "mrr_at_3", "value": 41.001}, {"type": "mrr_at_5", "value": 42.315000000000005}, {"type": "ndcg_at_1", "value": 34.455999999999996}, {"type": "ndcg_at_10", "value": 43.477}, {"type": "ndcg_at_100", "value": 48.953}, {"type": "ndcg_at_1000", "value": 51.19200000000001}, {"type": "ndcg_at_3", "value": 38.799}, {"type": "ndcg_at_5", "value": 40.743}, {"type": "precision_at_1", "value": 34.455999999999996}, {"type": "precision_at_10", "value": 7.902000000000001}, {"type": "precision_at_100", "value": 1.244}, {"type": "precision_at_1000", "value": 0.161}, {"type": "precision_at_3", "value": 18.511}, {"type": "precision_at_5", "value": 12.859000000000002}, {"type": "recall_at_1", "value": 27.861000000000004}, {"type": "recall_at_10", "value": 55.36}, {"type": "recall_at_100", "value": 78.384}, {"type": "recall_at_1000", "value": 93.447}, {"type": "recall_at_3", "value": 41.926}, {"type": "recall_at_5", "value": 47.257}, {"type": "map_at_1", "value": 26.375}, {"type": "map_at_10", "value": 35.571000000000005}, {"type": "map_at_100", "value": 
36.785000000000004}, {"type": "map_at_1000", "value": 36.905}, {"type": "map_at_3", "value": 32.49}, {"type": "map_at_5", "value": 34.123999999999995}, {"type": "mrr_at_1", "value": 32.647999999999996}, {"type": "mrr_at_10", "value": 40.598}, {"type": "mrr_at_100", "value": 41.484}, {"type": "mrr_at_1000", "value": 41.546}, {"type": "mrr_at_3", "value": 37.9}, {"type": "mrr_at_5", "value": 39.401}, {"type": "ndcg_at_1", "value": 32.647999999999996}, {"type": "ndcg_at_10", "value": 41.026}, {"type": "ndcg_at_100", "value": 46.365}, {"type": "ndcg_at_1000", "value": 48.876}, {"type": "ndcg_at_3", "value": 35.843}, {"type": "ndcg_at_5", "value": 38.118}, {"type": "precision_at_1", "value": 32.647999999999996}, {"type": "precision_at_10", "value": 7.443}, {"type": "precision_at_100", "value": 1.18}, {"type": "precision_at_1000", "value": 0.158}, {"type": "precision_at_3", "value": 16.819}, {"type": "precision_at_5", "value": 11.985999999999999}, {"type": "recall_at_1", "value": 26.375}, {"type": "recall_at_10", "value": 52.471000000000004}, {"type": "recall_at_100", "value": 75.354}, {"type": "recall_at_1000", "value": 92.35}, {"type": "recall_at_3", "value": 37.893}, {"type": "recall_at_5", "value": 43.935}, {"type": "map_at_1", "value": 25.012666666666668}, {"type": "map_at_10", "value": 33.685833333333335}, {"type": "map_at_100", "value": 34.849250000000005}, {"type": "map_at_1000", "value": 34.970083333333335}, {"type": "map_at_3", "value": 31.065083333333334}, {"type": "map_at_5", "value": 32.494416666666666}, {"type": "mrr_at_1", "value": 29.772666666666662}, {"type": "mrr_at_10", "value": 37.824666666666666}, {"type": "mrr_at_100", "value": 38.66741666666666}, {"type": "mrr_at_1000", "value": 38.72916666666666}, {"type": "mrr_at_3", "value": 35.54575}, {"type": "mrr_at_5", "value": 36.81524999999999}, {"type": "ndcg_at_1", "value": 29.772666666666662}, {"type": "ndcg_at_10", "value": 38.78241666666666}, {"type": "ndcg_at_100", "value": 43.84591666666667}, {"type": "ndcg_at_1000", "value": 46.275416666666665}, {"type": "ndcg_at_3", "value": 34.33416666666667}, {"type": "ndcg_at_5", "value": 36.345166666666664}, {"type": "precision_at_1", "value": 29.772666666666662}, {"type": "precision_at_10", "value": 6.794916666666667}, {"type": "precision_at_100", "value": 1.106416666666667}, {"type": "precision_at_1000", "value": 0.15033333333333335}, {"type": "precision_at_3", "value": 15.815083333333336}, {"type": "precision_at_5", "value": 11.184166666666664}, {"type": "recall_at_1", "value": 25.012666666666668}, {"type": "recall_at_10", "value": 49.748500000000014}, {"type": "recall_at_100", "value": 72.11341666666667}, {"type": "recall_at_1000", "value": 89.141}, {"type": "recall_at_3", "value": 37.242999999999995}, {"type": "recall_at_5", "value": 42.49033333333333}, {"type": "map_at_1", "value": 23.177}, {"type": "map_at_10", "value": 29.310000000000002}, {"type": "map_at_100", "value": 30.188}, {"type": "map_at_1000", "value": 30.29}, {"type": "map_at_3", "value": 27.356}, {"type": "map_at_5", "value": 28.410999999999998}, {"type": "mrr_at_1", "value": 26.074}, {"type": "mrr_at_10", "value": 32.002}, {"type": "mrr_at_100", "value": 32.838}, {"type": "mrr_at_1000", "value": 32.909}, {"type": "mrr_at_3", "value": 30.317}, {"type": "mrr_at_5", "value": 31.222}, {"type": "ndcg_at_1", "value": 26.074}, {"type": "ndcg_at_10", "value": 32.975}, {"type": "ndcg_at_100", "value": 37.621}, {"type": "ndcg_at_1000", "value": 40.253}, {"type": "ndcg_at_3", "value": 29.452}, {"type": "ndcg_at_5", "value": 
31.020999999999997}, {"type": "precision_at_1", "value": 26.074}, {"type": "precision_at_10", "value": 5.077}, {"type": "precision_at_100", "value": 0.8049999999999999}, {"type": "precision_at_1000", "value": 0.11100000000000002}, {"type": "precision_at_3", "value": 12.526000000000002}, {"type": "precision_at_5", "value": 8.588999999999999}, {"type": "recall_at_1", "value": 23.177}, {"type": "recall_at_10", "value": 41.613}, {"type": "recall_at_100", "value": 63.287000000000006}, {"type": "recall_at_1000", "value": 83.013}, {"type": "recall_at_3", "value": 31.783}, {"type": "recall_at_5", "value": 35.769}, {"type": "map_at_1", "value": 15.856}, {"type": "map_at_10", "value": 22.651}, {"type": "map_at_100", "value": 23.649}, {"type": "map_at_1000", "value": 23.783}, {"type": "map_at_3", "value": 20.591}, {"type": "map_at_5", "value": 21.684}, {"type": "mrr_at_1", "value": 19.408}, {"type": "mrr_at_10", "value": 26.51}, {"type": "mrr_at_100", "value": 27.356}, {"type": "mrr_at_1000", "value": 27.439999999999998}, {"type": "mrr_at_3", "value": 24.547}, {"type": "mrr_at_5", "value": 25.562}, {"type": "ndcg_at_1", "value": 19.408}, {"type": "ndcg_at_10", "value": 27.072000000000003}, {"type": "ndcg_at_100", "value": 31.980999999999998}, {"type": "ndcg_at_1000", "value": 35.167}, {"type": "ndcg_at_3", "value": 23.338}, {"type": "ndcg_at_5", "value": 24.94}, {"type": "precision_at_1", "value": 19.408}, {"type": "precision_at_10", "value": 4.9590000000000005}, {"type": "precision_at_100", "value": 0.8710000000000001}, {"type": "precision_at_1000", "value": 0.132}, {"type": "precision_at_3", "value": 11.138}, {"type": "precision_at_5", "value": 7.949000000000001}, {"type": "recall_at_1", "value": 15.856}, {"type": "recall_at_10", "value": 36.578}, {"type": "recall_at_100", "value": 58.89}, {"type": "recall_at_1000", "value": 81.743}, {"type": "recall_at_3", "value": 25.94}, {"type": "recall_at_5", "value": 30.153999999999996}, {"type": "map_at_1", "value": 25.892}, {"type": "map_at_10", "value": 33.899}, {"type": "map_at_100", "value": 34.955000000000005}, {"type": "map_at_1000", "value": 35.066}, {"type": "map_at_3", "value": 31.41}, {"type": "map_at_5", "value": 32.669}, {"type": "mrr_at_1", "value": 30.224}, {"type": "mrr_at_10", "value": 37.936}, {"type": "mrr_at_100", "value": 38.777}, {"type": "mrr_at_1000", "value": 38.85}, {"type": "mrr_at_3", "value": 35.821}, {"type": "mrr_at_5", "value": 36.894}, {"type": "ndcg_at_1", "value": 30.224}, {"type": "ndcg_at_10", "value": 38.766}, {"type": "ndcg_at_100", "value": 43.806}, {"type": "ndcg_at_1000", "value": 46.373999999999995}, {"type": "ndcg_at_3", "value": 34.325}, {"type": "ndcg_at_5", "value": 36.096000000000004}, {"type": "precision_at_1", "value": 30.224}, {"type": "precision_at_10", "value": 6.446000000000001}, {"type": "precision_at_100", "value": 1.0}, {"type": "precision_at_1000", "value": 0.133}, {"type": "precision_at_3", "value": 15.392}, {"type": "precision_at_5", "value": 10.671999999999999}, {"type": "recall_at_1", "value": 25.892}, {"type": "recall_at_10", "value": 49.573}, {"type": "recall_at_100", "value": 71.885}, {"type": "recall_at_1000", "value": 89.912}, {"type": "recall_at_3", "value": 37.226}, {"type": "recall_at_5", "value": 41.74}, {"type": "map_at_1", "value": 23.915}, {"type": "map_at_10", "value": 33.613}, {"type": "map_at_100", "value": 35.333999999999996}, {"type": "map_at_1000", "value": 35.563}, {"type": "map_at_3", "value": 31.203999999999997}, {"type": "map_at_5", "value": 32.479}, {"type": "mrr_at_1", 
"value": 29.447000000000003}, {"type": "mrr_at_10", "value": 38.440000000000005}, {"type": "mrr_at_100", "value": 39.459}, {"type": "mrr_at_1000", "value": 39.513999999999996}, {"type": "mrr_at_3", "value": 36.495}, {"type": "mrr_at_5", "value": 37.592}, {"type": "ndcg_at_1", "value": 29.447000000000003}, {"type": "ndcg_at_10", "value": 39.341}, {"type": "ndcg_at_100", "value": 45.382}, {"type": "ndcg_at_1000", "value": 47.921}, {"type": "ndcg_at_3", "value": 35.671}, {"type": "ndcg_at_5", "value": 37.299}, {"type": "precision_at_1", "value": 29.447000000000003}, {"type": "precision_at_10", "value": 7.648000000000001}, {"type": "precision_at_100", "value": 1.567}, {"type": "precision_at_1000", "value": 0.241}, {"type": "precision_at_3", "value": 17.194000000000003}, {"type": "precision_at_5", "value": 12.253}, {"type": "recall_at_1", "value": 23.915}, {"type": "recall_at_10", "value": 49.491}, {"type": "recall_at_100", "value": 76.483}, {"type": "recall_at_1000", "value": 92.674}, {"type": "recall_at_3", "value": 38.878}, {"type": "recall_at_5", "value": 43.492}, {"type": "map_at_1", "value": 20.283}, {"type": "map_at_10", "value": 26.851000000000003}, {"type": "map_at_100", "value": 27.973}, {"type": "map_at_1000", "value": 28.087}, {"type": "map_at_3", "value": 24.887}, {"type": "map_at_5", "value": 25.804}, {"type": "mrr_at_1", "value": 22.366}, {"type": "mrr_at_10", "value": 28.864}, {"type": "mrr_at_100", "value": 29.903000000000002}, {"type": "mrr_at_1000", "value": 29.988}, {"type": "mrr_at_3", "value": 27.017999999999997}, {"type": "mrr_at_5", "value": 27.813}, {"type": "ndcg_at_1", "value": 22.366}, {"type": "ndcg_at_10", "value": 30.898999999999997}, {"type": "ndcg_at_100", "value": 36.176}, {"type": "ndcg_at_1000", "value": 39.036}, {"type": "ndcg_at_3", "value": 26.932000000000002}, {"type": "ndcg_at_5", "value": 28.416999999999998}, {"type": "precision_at_1", "value": 22.366}, {"type": "precision_at_10", "value": 4.824}, {"type": "precision_at_100", "value": 0.804}, {"type": "precision_at_1000", "value": 0.116}, {"type": "precision_at_3", "value": 11.459999999999999}, {"type": "precision_at_5", "value": 7.8740000000000006}, {"type": "recall_at_1", "value": 20.283}, {"type": "recall_at_10", "value": 41.559000000000005}, {"type": "recall_at_100", "value": 65.051}, {"type": "recall_at_1000", "value": 86.47200000000001}, {"type": "recall_at_3", "value": 30.524}, {"type": "recall_at_5", "value": 34.11}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 11.326}, {"type": "map_at_10", "value": 19.357}, {"type": "map_at_100", "value": 21.014}, {"type": "map_at_1000", "value": 21.188000000000002}, {"type": "map_at_3", "value": 16.305}, {"type": "map_at_5", "value": 17.886}, {"type": "mrr_at_1", "value": 24.820999999999998}, {"type": "mrr_at_10", "value": 36.150999999999996}, {"type": "mrr_at_100", "value": 37.080999999999996}, {"type": "mrr_at_1000", "value": 37.123}, {"type": "mrr_at_3", "value": 32.952999999999996}, {"type": "mrr_at_5", "value": 34.917}, {"type": "ndcg_at_1", "value": 24.820999999999998}, {"type": "ndcg_at_10", "value": 27.131}, {"type": "ndcg_at_100", "value": 33.841}, {"type": "ndcg_at_1000", "value": 37.159}, {"type": "ndcg_at_3", "value": 22.311}, {"type": "ndcg_at_5", "value": 24.026}, {"type": "precision_at_1", "value": 24.820999999999998}, {"type": "precision_at_10", "value": 8.450000000000001}, {"type": 
"precision_at_100", "value": 1.557}, {"type": "precision_at_1000", "value": 0.218}, {"type": "precision_at_3", "value": 16.612}, {"type": "precision_at_5", "value": 12.808}, {"type": "recall_at_1", "value": 11.326}, {"type": "recall_at_10", "value": 32.548}, {"type": "recall_at_100", "value": 55.803000000000004}, {"type": "recall_at_1000", "value": 74.636}, {"type": "recall_at_3", "value": 20.549}, {"type": "recall_at_5", "value": 25.514}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 7.481}, {"type": "map_at_10", "value": 15.043999999999999}, {"type": "map_at_100", "value": 20.194000000000003}, {"type": "map_at_1000", "value": 21.423000000000002}, {"type": "map_at_3", "value": 11.238}, {"type": "map_at_5", "value": 12.828999999999999}, {"type": "mrr_at_1", "value": 54.50000000000001}, {"type": "mrr_at_10", "value": 64.713}, {"type": "mrr_at_100", "value": 65.216}, {"type": "mrr_at_1000", "value": 65.23}, {"type": "mrr_at_3", "value": 62.74999999999999}, {"type": "mrr_at_5", "value": 63.87500000000001}, {"type": "ndcg_at_1", "value": 43.375}, {"type": "ndcg_at_10", "value": 32.631}, {"type": "ndcg_at_100", "value": 36.338}, {"type": "ndcg_at_1000", "value": 43.541000000000004}, {"type": "ndcg_at_3", "value": 36.746}, {"type": "ndcg_at_5", "value": 34.419}, {"type": "precision_at_1", "value": 54.50000000000001}, {"type": "precision_at_10", "value": 24.825}, {"type": "precision_at_100", "value": 7.698}, {"type": "precision_at_1000", "value": 1.657}, {"type": "precision_at_3", "value": 38.917}, {"type": "precision_at_5", "value": 32.35}, {"type": "recall_at_1", "value": 7.481}, {"type": "recall_at_10", "value": 20.341}, {"type": "recall_at_100", "value": 41.778}, {"type": "recall_at_1000", "value": 64.82}, {"type": "recall_at_3", "value": 12.748000000000001}, {"type": "recall_at_5", "value": 15.507000000000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 46.580000000000005}, {"type": "f1", "value": 41.5149462395095}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 61.683}, {"type": "map_at_10", "value": 73.071}, {"type": "map_at_100", "value": 73.327}, {"type": "map_at_1000", "value": 73.341}, {"type": "map_at_3", "value": 71.446}, {"type": "map_at_5", "value": 72.557}, {"type": "mrr_at_1", "value": 66.44200000000001}, {"type": "mrr_at_10", "value": 77.725}, {"type": "mrr_at_100", "value": 77.89399999999999}, {"type": "mrr_at_1000", "value": 77.898}, {"type": "mrr_at_3", "value": 76.283}, {"type": "mrr_at_5", "value": 77.29700000000001}, {"type": "ndcg_at_1", "value": 66.44200000000001}, {"type": "ndcg_at_10", "value": 78.43}, {"type": "ndcg_at_100", "value": 79.462}, {"type": "ndcg_at_1000", "value": 79.754}, {"type": "ndcg_at_3", "value": 75.53800000000001}, {"type": "ndcg_at_5", "value": 77.332}, {"type": "precision_at_1", "value": 66.44200000000001}, {"type": "precision_at_10", "value": 9.878}, {"type": "precision_at_100", "value": 1.051}, {"type": "precision_at_1000", "value": 0.109}, {"type": "precision_at_3", "value": 29.878}, {"type": "precision_at_5", "value": 18.953}, {"type": "recall_at_1", 
"value": 61.683}, {"type": "recall_at_10", "value": 90.259}, {"type": "recall_at_100", "value": 94.633}, {"type": "recall_at_1000", "value": 96.60499999999999}, {"type": "recall_at_3", "value": 82.502}, {"type": "recall_at_5", "value": 86.978}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 17.724}, {"type": "map_at_10", "value": 29.487999999999996}, {"type": "map_at_100", "value": 31.243}, {"type": "map_at_1000", "value": 31.419999999999998}, {"type": "map_at_3", "value": 25.612000000000002}, {"type": "map_at_5", "value": 27.859}, {"type": "mrr_at_1", "value": 35.802}, {"type": "mrr_at_10", "value": 44.684000000000005}, {"type": "mrr_at_100", "value": 45.578}, {"type": "mrr_at_1000", "value": 45.621}, {"type": "mrr_at_3", "value": 42.361}, {"type": "mrr_at_5", "value": 43.85}, {"type": "ndcg_at_1", "value": 35.802}, {"type": "ndcg_at_10", "value": 37.009}, {"type": "ndcg_at_100", "value": 43.903}, {"type": "ndcg_at_1000", "value": 47.019}, {"type": "ndcg_at_3", "value": 33.634}, {"type": "ndcg_at_5", "value": 34.965}, {"type": "precision_at_1", "value": 35.802}, {"type": "precision_at_10", "value": 10.386}, {"type": "precision_at_100", "value": 1.7309999999999999}, {"type": "precision_at_1000", "value": 0.231}, {"type": "precision_at_3", "value": 22.84}, {"type": "precision_at_5", "value": 17.037}, {"type": "recall_at_1", "value": 17.724}, {"type": "recall_at_10", "value": 43.708000000000006}, {"type": "recall_at_100", "value": 69.902}, {"type": "recall_at_1000", "value": 88.51}, {"type": "recall_at_3", "value": 30.740000000000002}, {"type": "recall_at_5", "value": 36.742000000000004}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB FloresClusteringS2S", "type": "jinaai/flores_clustering", "config": "default", "split": "test", "revision": "480b580487f53a46f881354a8348335d4edbb2de"}, "metrics": [{"type": "v_measure", "value": 39.79120149869612}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 34.801}, {"type": "map_at_10", "value": 50.42100000000001}, {"type": "map_at_100", "value": 51.254}, {"type": "map_at_1000", "value": 51.327999999999996}, {"type": "map_at_3", "value": 47.56}, {"type": "map_at_5", "value": 49.379}, {"type": "mrr_at_1", "value": 69.602}, {"type": "mrr_at_10", "value": 76.385}, {"type": "mrr_at_100", "value": 76.668}, {"type": "mrr_at_1000", "value": 76.683}, {"type": "mrr_at_3", "value": 75.102}, {"type": "mrr_at_5", "value": 75.949}, {"type": "ndcg_at_1", "value": 69.602}, {"type": "ndcg_at_10", "value": 59.476}, {"type": "ndcg_at_100", "value": 62.527}, {"type": "ndcg_at_1000", "value": 64.043}, {"type": "ndcg_at_3", "value": 55.155}, {"type": "ndcg_at_5", "value": 57.623000000000005}, {"type": "precision_at_1", "value": 69.602}, {"type": "precision_at_10", "value": 12.292}, {"type": "precision_at_100", "value": 1.467}, {"type": "precision_at_1000", "value": 0.167}, {"type": "precision_at_3", "value": 34.634}, {"type": "precision_at_5", "value": 22.728}, {"type": "recall_at_1", "value": 34.801}, {"type": "recall_at_10", "value": 61.458}, {"type": "recall_at_100", "value": 73.363}, {"type": "recall_at_1000", "value": 83.43}, {"type": "recall_at_3", "value": 51.951}, {"type": "recall_at_5", "value": 56.82000000000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": 
"MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 67.46079999999999}, {"type": "ap", "value": 61.81278199159353}, {"type": "f1", "value": 67.26505019954826}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MIRACL", "type": "jinaai/miracl", "config": "default", "split": "test", "revision": "d28a029f35c4ff7f616df47b0edf54e6882395e6"}, "metrics": [{"type": "map", "value": 73.90464144118539}, {"type": "mrr", "value": 82.44674693216022}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MIRACLRetrieval", "type": "jinaai/miracl", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.299}, {"type": "map_at_10", "value": 70.547}, {"type": "map_at_100", "value": 72.394}, {"type": "map_at_1000", "value": 72.39999999999999}, {"type": "map_at_3", "value": 41.317}, {"type": "map_at_5", "value": 53.756}, {"type": "mrr_at_1", "value": 72.84}, {"type": "mrr_at_10", "value": 82.466}, {"type": "mrr_at_100", "value": 82.52199999999999}, {"type": "mrr_at_1000", "value": 82.52199999999999}, {"type": "mrr_at_3", "value": 80.607}, {"type": "mrr_at_5", "value": 82.065}, {"type": "ndcg_at_1", "value": 72.994}, {"type": "ndcg_at_10", "value": 80.89}, {"type": "ndcg_at_100", "value": 83.30199999999999}, {"type": "ndcg_at_1000", "value": 83.337}, {"type": "ndcg_at_3", "value": 70.357}, {"type": "ndcg_at_5", "value": 72.529}, {"type": "precision_at_1", "value": 72.994}, {"type": "precision_at_10", "value": 43.056}, {"type": "precision_at_100", "value": 4.603}, {"type": "precision_at_1000", "value": 0.461}, {"type": "precision_at_3", "value": 61.626000000000005}, {"type": "precision_at_5", "value": 55.525000000000006}, {"type": "recall_at_1", "value": 21.299}, {"type": "recall_at_10", "value": 93.903}, {"type": "recall_at_100", "value": 99.86699999999999}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 46.653}, {"type": "recall_at_5", "value": 65.72200000000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 90.37163702690378}, {"type": "f1", "value": 90.18615216514222}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (es)", "type": "mteb/mtop_domain", "config": "es", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 89.88992661774515}, {"type": "f1", "value": 89.3738963046966}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 71.97218422252622}, {"type": "f1", "value": 54.03096570916335}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (es)", "type": "mteb/mtop_intent", "config": "es", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 68.75917278185457}, {"type": "f1", "value": 49.144083814705844}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": 
"test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 70.75991930060525}, {"type": "f1", "value": 69.37993796176502}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (es)", "type": "mteb/amazon_massive_intent", "config": "es", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 66.93006052454606}, {"type": "f1", "value": 66.04029135274683}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 73.81977135171486}, {"type": "f1", "value": 74.10477122507747}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (es)", "type": "mteb/amazon_massive_scenario", "config": "es", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 71.23402824478816}, {"type": "f1", "value": 71.75572665880296}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 32.189750849969215}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 28.78357393555938}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 30.605612998328358}, {"type": "mrr", "value": 31.595529205695833}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MintakaESRetrieval", "type": "jinaai/mintakaqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 16.213}, {"type": "map_at_10", "value": 24.079}, {"type": "map_at_100", "value": 25.039}, {"type": "map_at_1000", "value": 25.142999999999997}, {"type": "map_at_3", "value": 21.823}, {"type": "map_at_5", "value": 23.069}, {"type": "mrr_at_1", "value": 16.213}, {"type": "mrr_at_10", "value": 24.079}, {"type": "mrr_at_100", "value": 25.039}, {"type": "mrr_at_1000", "value": 25.142999999999997}, {"type": "mrr_at_3", "value": 21.823}, {"type": "mrr_at_5", "value": 23.069}, {"type": "ndcg_at_1", "value": 16.213}, {"type": "ndcg_at_10", "value": 28.315}, {"type": "ndcg_at_100", "value": 33.475}, {"type": "ndcg_at_1000", "value": 36.838}, {"type": "ndcg_at_3", "value": 23.627000000000002}, {"type": "ndcg_at_5", "value": 25.879}, {"type": "precision_at_1", "value": 16.213}, {"type": "precision_at_10", "value": 4.183}, {"type": "precision_at_100", "value": 0.6709999999999999}, {"type": "precision_at_1000", "value": 0.095}, {"type": "precision_at_3", "value": 9.612}, {"type": "precision_at_5", "value": 6.865}, {"type": "recall_at_1", "value": 16.213}, {"type": "recall_at_10", "value": 41.832}, {"type": "recall_at_100", "value": 67.12}, {"type": "recall_at_1000", "value": 94.843}, {"type": "recall_at_3", "value": 28.837000000000003}, {"type": "recall_at_5", "value": 34.323}]}, 
{"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.692}, {"type": "map_at_10", "value": 10.783}, {"type": "map_at_100", "value": 13.447999999999999}, {"type": "map_at_1000", "value": 14.756}, {"type": "map_at_3", "value": 7.646}, {"type": "map_at_5", "value": 9.311}, {"type": "mrr_at_1", "value": 42.415000000000006}, {"type": "mrr_at_10", "value": 50.471}, {"type": "mrr_at_100", "value": 51.251999999999995}, {"type": "mrr_at_1000", "value": 51.292}, {"type": "mrr_at_3", "value": 48.4}, {"type": "mrr_at_5", "value": 49.809}, {"type": "ndcg_at_1", "value": 40.867}, {"type": "ndcg_at_10", "value": 30.303}, {"type": "ndcg_at_100", "value": 27.915}, {"type": "ndcg_at_1000", "value": 36.734}, {"type": "ndcg_at_3", "value": 35.74}, {"type": "ndcg_at_5", "value": 33.938}, {"type": "precision_at_1", "value": 42.415000000000006}, {"type": "precision_at_10", "value": 22.105}, {"type": "precision_at_100", "value": 7.173}, {"type": "precision_at_1000", "value": 2.007}, {"type": "precision_at_3", "value": 33.437}, {"type": "precision_at_5", "value": 29.349999999999998}, {"type": "recall_at_1", "value": 4.692}, {"type": "recall_at_10", "value": 14.798}, {"type": "recall_at_100", "value": 28.948}, {"type": "recall_at_1000", "value": 59.939}, {"type": "recall_at_3", "value": 8.562}, {"type": "recall_at_5", "value": 11.818}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 27.572999999999997}, {"type": "map_at_10", "value": 42.754}, {"type": "map_at_100", "value": 43.8}, {"type": "map_at_1000", "value": 43.838}, {"type": "map_at_3", "value": 38.157000000000004}, {"type": "map_at_5", "value": 40.9}, {"type": "mrr_at_1", "value": 31.373}, {"type": "mrr_at_10", "value": 45.321}, {"type": "mrr_at_100", "value": 46.109}, {"type": "mrr_at_1000", "value": 46.135}, {"type": "mrr_at_3", "value": 41.483}, {"type": "mrr_at_5", "value": 43.76}, {"type": "ndcg_at_1", "value": 31.373}, {"type": "ndcg_at_10", "value": 50.7}, {"type": "ndcg_at_100", "value": 55.103}, {"type": "ndcg_at_1000", "value": 55.955999999999996}, {"type": "ndcg_at_3", "value": 42.069}, {"type": "ndcg_at_5", "value": 46.595}, {"type": "precision_at_1", "value": 31.373}, {"type": "precision_at_10", "value": 8.601}, {"type": "precision_at_100", "value": 1.11}, {"type": "precision_at_1000", "value": 0.11900000000000001}, {"type": "precision_at_3", "value": 19.399}, {"type": "precision_at_5", "value": 14.224}, {"type": "recall_at_1", "value": 27.572999999999997}, {"type": "recall_at_10", "value": 72.465}, {"type": "recall_at_100", "value": 91.474}, {"type": "recall_at_1000", "value": 97.78099999999999}, {"type": "recall_at_3", "value": 50.087}, {"type": "recall_at_5", "value": 60.516000000000005}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.525}, {"type": "map_at_10", "value": 84.417}, {"type": "map_at_100", "value": 85.07000000000001}, {"type": "map_at_1000", "value": 85.085}, {"type": "map_at_3", "value": 81.45}, {"type": "map_at_5", "value": 83.317}, {"type": "mrr_at_1", "value": 81.17999999999999}, {"type": "mrr_at_10", "value": 87.34100000000001}, {"type": "mrr_at_100", "value": 87.461}, {"type": "mrr_at_1000", "value": 
87.46199999999999}, {"type": "mrr_at_3", "value": 86.372}, {"type": "mrr_at_5", "value": 87.046}, {"type": "ndcg_at_1", "value": 81.17999999999999}, {"type": "ndcg_at_10", "value": 88.144}, {"type": "ndcg_at_100", "value": 89.424}, {"type": "ndcg_at_1000", "value": 89.517}, {"type": "ndcg_at_3", "value": 85.282}, {"type": "ndcg_at_5", "value": 86.874}, {"type": "precision_at_1", "value": 81.17999999999999}, {"type": "precision_at_10", "value": 13.385}, {"type": "precision_at_100", "value": 1.533}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.29}, {"type": "precision_at_5", "value": 24.546}, {"type": "recall_at_1", "value": 70.525}, {"type": "recall_at_10", "value": 95.22500000000001}, {"type": "recall_at_100", "value": 99.572}, {"type": "recall_at_1000", "value": 99.98899999999999}, {"type": "recall_at_3", "value": 87.035}, {"type": "recall_at_5", "value": 91.526}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 48.284384328108736}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 56.02508021518392}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.023000000000001}, {"type": "map_at_10", "value": 10.046}, {"type": "map_at_100", "value": 11.802999999999999}, {"type": "map_at_1000", "value": 12.074}, {"type": "map_at_3", "value": 7.071}, {"type": "map_at_5", "value": 8.556}, {"type": "mrr_at_1", "value": 19.8}, {"type": "mrr_at_10", "value": 30.105999999999998}, {"type": "mrr_at_100", "value": 31.16}, {"type": "mrr_at_1000", "value": 31.224}, {"type": "mrr_at_3", "value": 26.633000000000003}, {"type": "mrr_at_5", "value": 28.768}, {"type": "ndcg_at_1", "value": 19.8}, {"type": "ndcg_at_10", "value": 17.358}, {"type": "ndcg_at_100", "value": 24.566}, {"type": "ndcg_at_1000", "value": 29.653000000000002}, {"type": "ndcg_at_3", "value": 16.052}, {"type": "ndcg_at_5", "value": 14.325}, {"type": "precision_at_1", "value": 19.8}, {"type": "precision_at_10", "value": 9.07}, {"type": "precision_at_100", "value": 1.955}, {"type": "precision_at_1000", "value": 0.318}, {"type": "precision_at_3", "value": 14.933}, {"type": "precision_at_5", "value": 12.68}, {"type": "recall_at_1", "value": 4.023000000000001}, {"type": "recall_at_10", "value": 18.398}, {"type": "recall_at_100", "value": 39.683}, {"type": "recall_at_1000", "value": 64.625}, {"type": "recall_at_3", "value": 9.113}, {"type": "recall_at_5", "value": 12.873000000000001}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.90508618312852}, {"type": "cos_sim_spearman", "value": 83.01323463129205}, {"type": "euclidean_pearson", "value": 84.35845059002891}, {"type": "euclidean_spearman", "value": 82.85508559018527}, {"type": "manhattan_pearson", "value": 84.3682368950498}, {"type": "manhattan_spearman", "value": 82.8619728517302}]}, {"task": {"type": "STS"}, "dataset": {"name": 
"MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 89.28294535873366}, {"type": "cos_sim_spearman", "value": 81.61879268131732}, {"type": "euclidean_pearson", "value": 85.99053604863724}, {"type": "euclidean_spearman", "value": 80.95176684739084}, {"type": "manhattan_pearson", "value": 85.98054086663903}, {"type": "manhattan_spearman", "value": 80.9911070430335}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.15898098455258}, {"type": "cos_sim_spearman", "value": 86.8247985072307}, {"type": "euclidean_pearson", "value": 86.25342429918649}, {"type": "euclidean_spearman", "value": 87.13468603023252}, {"type": "manhattan_pearson", "value": 86.2006134067688}, {"type": "manhattan_spearman", "value": 87.06135811996896}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.57403998481877}, {"type": "cos_sim_spearman", "value": 83.55947075172618}, {"type": "euclidean_pearson", "value": 84.97097562965358}, {"type": "euclidean_spearman", "value": 83.6287075601467}, {"type": "manhattan_pearson", "value": 84.87092197104133}, {"type": "manhattan_spearman", "value": 83.53783891641335}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.14632780204231}, {"type": "cos_sim_spearman", "value": 88.74903634923868}, {"type": "euclidean_pearson", "value": 88.03922995855112}, {"type": "euclidean_spearman", "value": 88.72852190525855}, {"type": "manhattan_pearson", "value": 87.9694791024271}, {"type": "manhattan_spearman", "value": 88.66461452107418}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.75989818558652}, {"type": "cos_sim_spearman", "value": 86.03107893122942}, {"type": "euclidean_pearson", "value": 85.21908960133018}, {"type": "euclidean_spearman", "value": 85.93012720153482}, {"type": "manhattan_pearson", "value": 85.1969170195502}, {"type": "manhattan_spearman", "value": 85.8975254197784}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 89.16803898789955}, {"type": "cos_sim_spearman", "value": 88.56139047950525}, {"type": "euclidean_pearson", "value": 88.09685325747859}, {"type": "euclidean_spearman", "value": 88.0457609458947}, {"type": "manhattan_pearson", "value": 88.07054413001431}, {"type": "manhattan_spearman", "value": 88.10784098889314}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-en)", "type": "mteb/sts17-crosslingual-sts", "config": "es-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.7160384474547}, {"type": "cos_sim_spearman", 
"value": 86.4899235500562}, {"type": "euclidean_pearson", "value": 85.90854477703468}, {"type": "euclidean_spearman", "value": 86.16085009124498}, {"type": "manhattan_pearson", "value": 85.9249735317884}, {"type": "manhattan_spearman", "value": 86.25038421339116}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-es)", "type": "mteb/sts17-crosslingual-sts", "config": "es-es", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 89.37914622360788}, {"type": "cos_sim_spearman", "value": 88.24619159322809}, {"type": "euclidean_pearson", "value": 89.00538382632769}, {"type": "euclidean_spearman", "value": 88.44675863524736}, {"type": "manhattan_pearson", "value": 88.97372120683606}, {"type": "manhattan_spearman", "value": 88.33509324222129}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 66.22181360203069}, {"type": "cos_sim_spearman", "value": 65.6218291833768}, {"type": "euclidean_pearson", "value": 67.14543788822508}, {"type": "euclidean_spearman", "value": 65.21269939987857}, {"type": "manhattan_pearson", "value": 67.03304607195636}, {"type": "manhattan_spearman", "value": 65.18885316423805}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es)", "type": "mteb/sts22-crosslingual-sts", "config": "es", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 65.71694059677084}, {"type": "cos_sim_spearman", "value": 67.96591844540954}, {"type": "euclidean_pearson", "value": 65.6964079162296}, {"type": "euclidean_spearman", "value": 67.53027948900173}, {"type": "manhattan_pearson", "value": 65.93545097673741}, {"type": "manhattan_spearman", "value": 67.7261811805062}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-en)", "type": "mteb/sts22-crosslingual-sts", "config": "es-en", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.43544796375058}, {"type": "cos_sim_spearman", "value": 78.80462701160789}, {"type": "euclidean_pearson", "value": 76.19135575163138}, {"type": "euclidean_spearman", "value": 78.4974732597096}, {"type": "manhattan_pearson", "value": 76.3254742699264}, {"type": "manhattan_spearman", "value": 78.51884307690416}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.46805293607684}, {"type": "cos_sim_spearman", "value": 87.83792784689113}, {"type": "euclidean_pearson", "value": 87.3872143683234}, {"type": "euclidean_spearman", "value": 87.61611384542778}, {"type": "manhattan_pearson", "value": 87.38542672601992}, {"type": "manhattan_spearman", "value": 87.61423971087297}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSES", "type": "PlanTL-GOB-ES/sts-es", "config": "default", "split": "test", "revision": "0912bb6c9393c76d62a7c5ee81c4c817ff47c9f4"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.55286866116202}, {"type": "cos_sim_spearman", "value": 80.22150503320272}, {"type": "euclidean_pearson", "value": 83.27223445187087}, {"type": "euclidean_spearman", "value": 80.59078590992925}, {"type": "manhattan_pearson", "value": 
83.23095887013197}, {"type": "manhattan_spearman", "value": 80.87994285189795}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 79.29717302265792}, {"type": "mrr", "value": 94.02156304117088}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 49.9}, {"type": "map_at_10", "value": 58.626}, {"type": "map_at_100", "value": 59.519999999999996}, {"type": "map_at_1000", "value": 59.55200000000001}, {"type": "map_at_3", "value": 56.232000000000006}, {"type": "map_at_5", "value": 57.833}, {"type": "mrr_at_1", "value": 52.333}, {"type": "mrr_at_10", "value": 60.039}, {"type": "mrr_at_100", "value": 60.732}, {"type": "mrr_at_1000", "value": 60.75899999999999}, {"type": "mrr_at_3", "value": 58.278}, {"type": "mrr_at_5", "value": 59.428000000000004}, {"type": "ndcg_at_1", "value": 52.333}, {"type": "ndcg_at_10", "value": 62.67}, {"type": "ndcg_at_100", "value": 66.465}, {"type": "ndcg_at_1000", "value": 67.425}, {"type": "ndcg_at_3", "value": 58.711999999999996}, {"type": "ndcg_at_5", "value": 60.958999999999996}, {"type": "precision_at_1", "value": 52.333}, {"type": "precision_at_10", "value": 8.333}, {"type": "precision_at_100", "value": 1.027}, {"type": "precision_at_1000", "value": 0.11100000000000002}, {"type": "precision_at_3", "value": 22.778000000000002}, {"type": "precision_at_5", "value": 15.267}, {"type": "recall_at_1", "value": 49.9}, {"type": "recall_at_10", "value": 73.394}, {"type": "recall_at_100", "value": 90.43299999999999}, {"type": "recall_at_1000", "value": 98.167}, {"type": "recall_at_3", "value": 63.032999999999994}, {"type": "recall_at_5", "value": 68.444}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB SpanishNewsClusteringP2P", "type": "jinaai/spanish_news_clustering", "config": "default", "split": "test", "revision": "b5edc3d3d7c12c7b9f883e9da50f6732f3624142"}, "metrics": [{"type": "v_measure", "value": 48.30543557796266}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SpanishPassageRetrievalS2P", "type": "jinaai/spanish_passage_retrieval", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 14.443}, {"type": "map_at_10", "value": 28.736}, {"type": "map_at_100", "value": 34.514}, {"type": "map_at_1000", "value": 35.004000000000005}, {"type": "map_at_3", "value": 20.308}, {"type": "map_at_5", "value": 25.404}, {"type": "mrr_at_1", "value": 50.29900000000001}, {"type": "mrr_at_10", "value": 63.757}, {"type": "mrr_at_100", "value": 64.238}, {"type": "mrr_at_1000", "value": 64.24600000000001}, {"type": "mrr_at_3", "value": 59.480999999999995}, {"type": "mrr_at_5", "value": 62.924}, {"type": "ndcg_at_1", "value": 50.29900000000001}, {"type": "ndcg_at_10", "value": 42.126999999999995}, {"type": "ndcg_at_100", "value": 57.208000000000006}, {"type": "ndcg_at_1000", "value": 60.646}, {"type": "ndcg_at_3", "value": 38.722}, {"type": "ndcg_at_5", "value": 40.007999999999996}, {"type": "precision_at_1", "value": 50.29900000000001}, {"type": "precision_at_10", "value": 19.82}, {"type": "precision_at_100", "value": 4.82}, {"type": "precision_at_1000", "value": 0.5910000000000001}, {"type": "precision_at_3", "value": 31.537}, {"type": "precision_at_5", "value": 28.262999999999998}, {"type": 
"recall_at_1", "value": 14.443}, {"type": "recall_at_10", "value": 43.885999999999996}, {"type": "recall_at_100", "value": 85.231}, {"type": "recall_at_1000", "value": 99.07000000000001}, {"type": "recall_at_3", "value": 22.486}, {"type": "recall_at_5", "value": 33.035}, {"type": "map_at_1", "value": 15.578}, {"type": "map_at_10", "value": 52.214000000000006}, {"type": "map_at_100", "value": 64.791}, {"type": "map_at_1000", "value": 64.791}, {"type": "map_at_3", "value": 33.396}, {"type": "map_at_5", "value": 41.728}, {"type": "mrr_at_1", "value": 73.653}, {"type": "mrr_at_10", "value": 85.116}, {"type": "mrr_at_100", "value": 85.205}, {"type": "mrr_at_1000", "value": 85.205}, {"type": "mrr_at_3", "value": 84.631}, {"type": "mrr_at_5", "value": 85.05}, {"type": "ndcg_at_1", "value": 76.64699999999999}, {"type": "ndcg_at_10", "value": 70.38600000000001}, {"type": "ndcg_at_100", "value": 82.27600000000001}, {"type": "ndcg_at_1000", "value": 82.27600000000001}, {"type": "ndcg_at_3", "value": 70.422}, {"type": "ndcg_at_5", "value": 69.545}, {"type": "precision_at_1", "value": 76.64699999999999}, {"type": "precision_at_10", "value": 43.653}, {"type": "precision_at_100", "value": 7.718999999999999}, {"type": "precision_at_1000", "value": 0.772}, {"type": "precision_at_3", "value": 64.671}, {"type": "precision_at_5", "value": 56.766000000000005}, {"type": "recall_at_1", "value": 15.578}, {"type": "recall_at_10", "value": 67.459}, {"type": "recall_at_100", "value": 100.0}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 36.922}, {"type": "recall_at_5", "value": 49.424}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.81683168316832}, {"type": "cos_sim_ap", "value": 95.61502659412484}, {"type": "cos_sim_f1", "value": 90.6813627254509}, {"type": "cos_sim_precision", "value": 90.86345381526104}, {"type": "cos_sim_recall", "value": 90.5}, {"type": "dot_accuracy", "value": 99.8039603960396}, {"type": "dot_ap", "value": 95.36783483182609}, {"type": "dot_f1", "value": 89.90825688073394}, {"type": "dot_precision", "value": 91.68399168399168}, {"type": "dot_recall", "value": 88.2}, {"type": "euclidean_accuracy", "value": 99.81188118811882}, {"type": "euclidean_ap", "value": 95.51583052324564}, {"type": "euclidean_f1", "value": 90.46214355948868}, {"type": "euclidean_precision", "value": 88.97485493230174}, {"type": "euclidean_recall", "value": 92.0}, {"type": "manhattan_accuracy", "value": 99.8079207920792}, {"type": "manhattan_ap", "value": 95.44030644653718}, {"type": "manhattan_f1", "value": 90.37698412698413}, {"type": "manhattan_precision", "value": 89.66535433070865}, {"type": "manhattan_recall", "value": 91.10000000000001}, {"type": "max_accuracy", "value": 99.81683168316832}, {"type": "max_ap", "value": 95.61502659412484}, {"type": "max_f1", "value": 90.6813627254509}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 55.39046705023096}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", 
"revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 33.57429225651293}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 50.17622570658746}, {"type": "mrr", "value": 50.99844293778118}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 29.97416289382191}, {"type": "cos_sim_spearman", "value": 29.871890597161432}, {"type": "dot_pearson", "value": 28.768845892613644}, {"type": "dot_spearman", "value": 28.872458999448686}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.22599999999999998}, {"type": "map_at_10", "value": 1.646}, {"type": "map_at_100", "value": 9.491}, {"type": "map_at_1000", "value": 23.75}, {"type": "map_at_3", "value": 0.588}, {"type": "map_at_5", "value": 0.9129999999999999}, {"type": "mrr_at_1", "value": 84.0}, {"type": "mrr_at_10", "value": 89.889}, {"type": "mrr_at_100", "value": 89.889}, {"type": "mrr_at_1000", "value": 89.889}, {"type": "mrr_at_3", "value": 89.667}, {"type": "mrr_at_5", "value": 89.667}, {"type": "ndcg_at_1", "value": 75.0}, {"type": "ndcg_at_10", "value": 67.368}, {"type": "ndcg_at_100", "value": 52.834}, {"type": "ndcg_at_1000", "value": 49.144}, {"type": "ndcg_at_3", "value": 72.866}, {"type": "ndcg_at_5", "value": 70.16}, {"type": "precision_at_1", "value": 84.0}, {"type": "precision_at_10", "value": 71.8}, {"type": "precision_at_100", "value": 54.04}, {"type": "precision_at_1000", "value": 21.709999999999997}, {"type": "precision_at_3", "value": 77.333}, {"type": "precision_at_5", "value": 74.0}, {"type": "recall_at_1", "value": 0.22599999999999998}, {"type": "recall_at_10", "value": 1.9029999999999998}, {"type": "recall_at_100", "value": 13.012}, {"type": "recall_at_1000", "value": 46.105000000000004}, {"type": "recall_at_3", "value": 0.63}, {"type": "recall_at_5", "value": 1.0030000000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 1.5}, {"type": "map_at_10", "value": 8.193999999999999}, {"type": "map_at_100", "value": 14.01}, {"type": "map_at_1000", "value": 15.570999999999998}, {"type": "map_at_3", "value": 4.361000000000001}, {"type": "map_at_5", "value": 5.9270000000000005}, {"type": "mrr_at_1", "value": 16.326999999999998}, {"type": "mrr_at_10", "value": 33.326}, {"type": "mrr_at_100", "value": 34.592}, {"type": "mrr_at_1000", "value": 34.592}, {"type": "mrr_at_3", "value": 29.252}, {"type": "mrr_at_5", "value": 30.680000000000003}, {"type": "ndcg_at_1", "value": 15.306000000000001}, {"type": "ndcg_at_10", "value": 19.819}, {"type": "ndcg_at_100", "value": 33.428000000000004}, {"type": "ndcg_at_1000", "value": 45.024}, {"type": "ndcg_at_3", "value": 19.667}, {"type": "ndcg_at_5", "value": 19.625}, {"type": "precision_at_1", "value": 16.326999999999998}, {"type": "precision_at_10", "value": 18.367}, {"type": "precision_at_100", "value": 7.367}, {"type": "precision_at_1000", 
"value": 1.496}, {"type": "precision_at_3", "value": 23.128999999999998}, {"type": "precision_at_5", "value": 21.633}, {"type": "recall_at_1", "value": 1.5}, {"type": "recall_at_10", "value": 14.362}, {"type": "recall_at_100", "value": 45.842}, {"type": "recall_at_1000", "value": 80.42}, {"type": "recall_at_3", "value": 5.99}, {"type": "recall_at_5", "value": 8.701}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 70.04740000000001}, {"type": "ap", "value": 13.58661943759992}, {"type": "f1", "value": 53.727487131754195}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 61.06395019807584}, {"type": "f1", "value": 61.36753664680866}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 40.19881263066229}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 85.19401561661799}, {"type": "cos_sim_ap", "value": 71.62462506173092}, {"type": "cos_sim_f1", "value": 66.0641327225455}, {"type": "cos_sim_precision", "value": 62.234662934453}, {"type": "cos_sim_recall", "value": 70.3957783641161}, {"type": "dot_accuracy", "value": 84.69333015437802}, {"type": "dot_ap", "value": 69.83805526490895}, {"type": "dot_f1", "value": 64.85446235265817}, {"type": "dot_precision", "value": 59.59328028293546}, {"type": "dot_recall", "value": 71.13456464379946}, {"type": "euclidean_accuracy", "value": 85.38475293556655}, {"type": "euclidean_ap", "value": 72.05594596250286}, {"type": "euclidean_f1", "value": 66.53543307086615}, {"type": "euclidean_precision", "value": 62.332872291378514}, {"type": "euclidean_recall", "value": 71.34564643799473}, {"type": "manhattan_accuracy", "value": 85.3907134767837}, {"type": "manhattan_ap", "value": 72.04585410650152}, {"type": "manhattan_f1", "value": 66.57132642116554}, {"type": "manhattan_precision", "value": 60.704194740273856}, {"type": "manhattan_recall", "value": 73.6939313984169}, {"type": "max_accuracy", "value": 85.3907134767837}, {"type": "max_ap", "value": 72.05594596250286}, {"type": "max_f1", "value": 66.57132642116554}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.30414871735165}, {"type": "cos_sim_ap", "value": 86.4398673359918}, {"type": "cos_sim_f1", "value": 78.9243598692186}, {"type": "cos_sim_precision", "value": 75.47249350101876}, {"type": "cos_sim_recall", "value": 82.7071142593163}, {"type": "dot_accuracy", "value": 89.26145845461248}, {"type": "dot_ap", "value": 86.32172118414802}, {"type": "dot_f1", "value": 
78.8277467755645}, {"type": "dot_precision", "value": 75.79418662497335}, {"type": "dot_recall", "value": 82.11425931629196}, {"type": "euclidean_accuracy", "value": 89.24205378973105}, {"type": "euclidean_ap", "value": 86.23988673522649}, {"type": "euclidean_f1", "value": 78.67984857951413}, {"type": "euclidean_precision", "value": 75.2689684269742}, {"type": "euclidean_recall", "value": 82.41453649522637}, {"type": "manhattan_accuracy", "value": 89.18189932859859}, {"type": "manhattan_ap", "value": 86.21003833972824}, {"type": "manhattan_f1", "value": 78.70972564850115}, {"type": "manhattan_precision", "value": 76.485544094145}, {"type": "manhattan_recall", "value": 81.0671388974438}, {"type": "max_accuracy", "value": 89.30414871735165}, {"type": "max_ap", "value": 86.4398673359918}, {"type": "max_f1", "value": 78.9243598692186}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB WikiCitiesClustering", "type": "jinaai/cities_wiki_clustering", "config": "default", "split": "test", "revision": "ddc9ee9242fa65332597f70e967ecc38b9d734fa"}, "metrics": [{"type": "v_measure", "value": 73.254610626148}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB XMarketES", "type": "jinaai/xmarket_ml", "config": "default", "split": "test", "revision": "705db869e8107dfe6e34b832af90446e77d813e3"}, "metrics": [{"type": "map_at_1", "value": 5.506}, {"type": "map_at_10", "value": 11.546}, {"type": "map_at_100", "value": 14.299999999999999}, {"type": "map_at_1000", "value": 15.146999999999998}, {"type": "map_at_3", "value": 8.748000000000001}, {"type": "map_at_5", "value": 10.036000000000001}, {"type": "mrr_at_1", "value": 17.902}, {"type": "mrr_at_10", "value": 25.698999999999998}, {"type": "mrr_at_100", "value": 26.634}, {"type": "mrr_at_1000", "value": 26.704}, {"type": "mrr_at_3", "value": 23.244999999999997}, {"type": "mrr_at_5", "value": 24.555}, {"type": "ndcg_at_1", "value": 17.902}, {"type": "ndcg_at_10", "value": 19.714000000000002}, {"type": "ndcg_at_100", "value": 25.363000000000003}, {"type": "ndcg_at_1000", "value": 30.903999999999996}, {"type": "ndcg_at_3", "value": 17.884}, {"type": "ndcg_at_5", "value": 18.462}, {"type": "precision_at_1", "value": 17.902}, {"type": "precision_at_10", "value": 10.467}, {"type": "precision_at_100", "value": 3.9699999999999998}, {"type": "precision_at_1000", "value": 1.1320000000000001}, {"type": "precision_at_3", "value": 14.387}, {"type": "precision_at_5", "value": 12.727}, {"type": "recall_at_1", "value": 5.506}, {"type": "recall_at_10", "value": 19.997999999999998}, {"type": "recall_at_100", "value": 42.947}, {"type": "recall_at_1000", "value": 67.333}, {"type": "recall_at_3", "value": 11.158}, {"type": "recall_at_5", "value": 14.577000000000002}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB XPQAESRetrieval", "type": "jinaai/xpqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 32.53}, {"type": "map_at_10", "value": 58.68600000000001}, {"type": "map_at_100", "value": 60.45399999999999}, {"type": "map_at_1000", "value": 60.51499999999999}, {"type": "map_at_3", "value": 50.356}, {"type": "map_at_5", "value": 55.98}, {"type": "mrr_at_1", "value": 61.791}, {"type": "mrr_at_10", "value": 68.952}, {"type": "mrr_at_100", "value": 69.524}, {"type": "mrr_at_1000", "value": 69.538}, {"type": "mrr_at_3", "value": 67.087}, {"type": "mrr_at_5", "value": 68.052}, {"type": "ndcg_at_1", "value": 61.791}, {"type": "ndcg_at_10", "value": 65.359}, {"type": "ndcg_at_100", "value": 
70.95700000000001}, {"type": "ndcg_at_1000", "value": 71.881}, {"type": "ndcg_at_3", "value": 59.999}, {"type": "ndcg_at_5", "value": 61.316}, {"type": "precision_at_1", "value": 61.791}, {"type": "precision_at_10", "value": 18.184}, {"type": "precision_at_100", "value": 2.317}, {"type": "precision_at_1000", "value": 0.245}, {"type": "precision_at_3", "value": 42.203}, {"type": "precision_at_5", "value": 31.374999999999996}, {"type": "recall_at_1", "value": 32.53}, {"type": "recall_at_10", "value": 73.098}, {"type": "recall_at_100", "value": 94.029}, {"type": "recall_at_1000", "value": 99.842}, {"type": "recall_at_3", "value": 54.525}, {"type": "recall_at_5", "value": 63.796}]}]}]}
dataset
null
584
Merikatorihuhu/SimCSE-finetuned-vietnamese-legal-documents
Merikatorihuhu
sentence-similarity
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:120210", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:VoVanPhuc/sup-SimCSE-VietNamese-phobert-base", "base_model:finetune:VoVanPhuc/sup-SimCSE-VietNamese-phobert-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-10-29T11:36:18Z
2024-10-29T11:36:33+00:00
7
1
--- base_model: VoVanPhuc/sup-SimCSE-VietNamese-phobert-base library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:120210 - loss:MultipleNegativesRankingLoss widget: - source_sentence: Chủ tịch Ủy ban nhân dân xã có quyền ra quyết định cưỡng chế tháo dỡ công trình xây dựng trên đất nông nghiệp khi chưa chuyển mục đích sử dụng đất hay không? sentences: - 'Đối tượng, điều kiện kéo dài tuổi phục vụ tại ngũ 1. Đối tượng: a) Quân nhân chuyên nghiệp có trình độ cao đẳng trở lên đang đảm nhiệm các chức danh: Kỹ thuật viên, Nhân viên Kỹ thuật, Huấn luyện viên, Nghệ sĩ, Nhạc sĩ, Diễn viên làm việc đúng chuyên ngành đào tạo ở các cơ sở nghiên cứu, nhà trường, bệnh viện, trung tâm thể dục thể thao, đoàn nghệ thuật, nhà máy, doanh nghiệp quốc phòng; đơn vị đóng quân ở địa bàn vùng sâu, vùng xa, biên giới, hải đảo. b) Quân nhân chuyên nghiệp đang làm việc thuộc các chuyên ngành hẹp được đào tạo công phu hoặc chuyên ngành Quân đội chưa đào tạo được; thợ bậc cao. c) Quân nhân chuyên nghiệp đang đảm nhiệm chức vụ chỉ huy, quản lý ở các nhà máy, doanh nghiệp quốc phòng. d) Quân nhân chuyên nghiệp không thuộc đối tượng quy định tại điểm a, điểm b, điểm c khoản này do Bộ trưởng Bộ Quốc phòng quyết định. 2. Điều kiện: Quân nhân chuyên nghiệp thuộc đối tượng quy định tại khoản 1 Điều này được kéo dài tuổi phục vụ tại ngũ khi có đủ các điều kiện sau: a) Đơn vị có biên chế và nhu cầu sử dụng; b) Hết hạn tuổi phục vụ tại ngũ cao nhất theo cấp bậc quân hàm quy định tại khoản 2 Điều 17 Luật Quân nhân chuyên nghiệp, công nhân và viên chức quốc phòng; chưa có người thay thế; tự nguyện tiếp tục phục vụ tại ngũ; c) Có đủ phẩm chất chính trị, đạo đức, sức khỏe để hoàn thành nhiệm vụ được giao; d) Có trình độ chuyên môn kỹ thuật, nghiệp vụ giỏi; tay nghề cao; chất lượng, hiệu quả công tác tốt.' - 'Thi hành quyết định cưỡng chế 1. Người ra quyết định cưỡng chế có trách nhiệm gửi ngay quyết định cưỡng chế cho các cá nhân, tổ chức liên quan và tổ chức thực hiện việc cưỡng chế thi hành quyết định xử phạt của mình và của cấp dưới. ..."' - 'Trình tự, thủ tục đăng ký tài khoản định danh điện tử đối với công dân Việt Nam 1. Đăng ký tài khoản định danh điện tử mức độ 1 qua ứng dụng VNelD đối với công dân đã có thẻ Căn cước công dân gắn chíp điện tử a) Công dân sử dụng thiết bị di động tải và cài đặt ứng dụng VNelD. b) Công dân sử dụng ứng dụng VNelD để nhập thông tin về số định danh cá nhân và số điện thoại hoặc địa chỉ thư điện tử; cung cấp các thông tin theo hướng dẫn trên ứng dụng VNelD; thu nhận ảnh chân dung bằng thiết bị di động và gửi yêu cầu đề nghị cấp tài khoản định danh điện tử tới cơ quan quản lý định danh và xác thực điện tử qua ứng dụng VNelD. c) Cơ quan quản lý định danh điện tử thông báo kết quả đăng ký tài khoản qua ứng dụng VNelD hoặc tin nhắn SMS hoặc địa chỉ thư điện tử. 2. Đăng ký tài khoản định danh điện tử mức độ 2 a) Đối với công dân đã được cấp thẻ Căn cước công dân gắn chíp điện tử: Công dân đến Công an xã, phường, thị trấn hoặc nơi làm thủ tục cấp thẻ Căn cước công dân để làm thủ tục cấp tài khoản định danh điện tử. Công dân xuất trình thẻ Căn cước công dân gắn chíp điện tử, cung cấp thông tin về số điện thoại hoặc địa chỉ thư điện tử và đề nghị bổ sung thông tin được tích hợp vào tài khoản định danh điện tử. 
Cán bộ tiếp nhận nhập thông tin công dân cung cấp vào hệ thống định danh và xác thực điện tử; chụp ảnh chân dung, thu nhận vân tay của công dân đến làm thủ tục để xác thực với Cơ sở dữ liệu căn cước công dân và khẳng định sự đồng ý đăng ký tạo lập tài khoản định danh điện tử. Cơ quan quản lý định danh điện tử thông báo kết quả đăng ký tài khoản qua ứng dụng VNelD hoặc tin nhắn SMS hoặc địa chỉ thư điện tử. b) Cơ quan Công an tiến hành cấp tài khoản định danh điện tử mức độ 2 cùng với cấp thẻ Căn cước công dân với trường hợp công dân chưa được cấp Căn cước công dân gắn chíp điện tử.' - source_sentence: Mức hưởng chế độ thai sản đối với lao động nam là người nước ngoài được pháp luật quy định như thế nào? sentences: - '"Điều 21. Thông báo kết quả và xác nhận nhập học 1. Cơ sở đào tạo gửi giấy báo trúng tuyển cho những thí sinh trúng tuyển, trong đó ghi rõ những thủ tục cần thiết đối với thí sinh khi nhập học và phương thức nhập học của thí sinh. 2. Thí sinh xác nhận nhập học bằng hình thức trực tuyến trên hệ thống, trước khi nhập học tại cơ sở đào tạo. 3. Đối với những thí sinh không xác nhận nhập học trong thời hạn quy định: a) Nếu không có lý do chính đáng thì coi như thí sinh từ chối nhập học và cơ sở đào tạo có quyền không tiếp nhận; b) Nếu do ốm đau, tai nạn, có giấy xác nhận của bệnh viện quận, huyện trở lên hoặc do thiên tai có xác nhận của UBND quận, huyện trở lên, cơ sở đào tạo xem xét quyết định tiếp nhận thí sinh vào học hoặc bảo lưu kết quả tuyển sinh để thí sinh vào học sau; c) Nếu do sai sót, nhầm lẫn của cán bộ thực hiện công tác tuyển sinh hoặc cá nhân thí sinh gây ra, cơ sở đào tạo chủ động phối hợp với các cá nhân, tổ chức liên quan xem xét các minh chứng và quyết định việc tiếp nhận thí sinh vào học hoặc bảo lưu kết quả tuyển sinh để thí sinh vào học sau. 4. Thí sinh đã xác nhận nhập học tại một cơ sở đào tạo không được tham gia xét tuyển ở nơi khác hoặc ở các đợt xét tuyển bổ sung, trừ trường hợp được cơ sở đào tạo cho phép."' - 'Tổ chức, nhiệm vụ, quyền hạn của Ban Chỉ huy ... 2. Nhiệm vụ, quyền hạn của Ban Chỉ huy: a) Chỉ đạo xây dựng, ban hành quy định về công tác bảo đảm an toàn PCCC và CNCH tại Trụ sở cơ quan Bộ Tư pháp. b) Hướng dẫn, phối hợp với các đơn vị thuộc Bộ và chỉ đạo Đội PCCC và CNCH cơ sở tổ chức tuyên truyền, bồi dưỡng nghiệp vụ PCCC và CNCH. c) Chỉ đạo Đội PCCC và CNCH cơ sở tại Trụ sở cơ quan Bộ Tư pháp xây dựng, trình cấp có thẩm quyền phê duyệt và tổ chức thực tập phương án PCCC, phương án CNCH. d) Chỉ đạo Đội PCCC và CNCH cơ sở tại Trụ sở cơ quan Bộ Tư pháp quản lý các trang thiết bị PCCC và CNCH. đ) Chỉ đạo chữa cháy, CNCH khi xảy ra cháy, sự cố, tai nạn tại Trụ sở cơ quan Bộ Tư pháp. e) Chỉ đạo việc tổ chức lập và lưu giữ hồ sơ quản lý, theo dõi hoạt động PCCC, CNCH tại Trụ sở cơ quan Bộ Tư pháp. g) Chỉ đạo việc sơ kết, tổng kết các hoạt động về PCCC và CNCH của cơ quan; kiểm tra, đôn đốc việc chấp hành các quy định về PCCC và CNCH. h) Đề xuất việc khen thưởng, kỷ luật các tập thể, cá nhân trong việc thực hiện công tác PCCC, CNCH. i) Chỉ đạo Đội PCCC và CNCH cơ sở dự trù kinh phí cho các hoạt động PCCC và CNCH tại Trụ sở cơ quan Bộ Tư pháp. k) Thực hiện các nhiệm vụ khác do Bộ trưởng giao và theo quy định của pháp luật.' - 'Mức hưởng chế độ thai sản ... b) Mức hưởng một ngày đối với trường hợp quy định tại Điều 32 và khoản 2 Điều 34 của Luật này được tính bằng mức hưởng chế độ thai sản theo tháng chia cho 24 ngày.' - source_sentence: Doanh nghiệp được áp dụng chế độ ưu tiên không cung cấp báo cáo kiểm toán đúng thời hạn bị phạt bao nhiêu tiền? 
sentences: - 'Thay đổi Thẩm phán, Hội thẩm 1. Thẩm phán, Hội thẩm phải từ chối tham gia xét xử hoặc bị thay đổi khi thuộc một trong các trường hợp: a) Trường hợp quy định tại Điều 49 của Bộ luật này; b) Họ cùng trong một Hội đồng xét xử và là người thân thích với nhau; c) Đã tham gia xét xử sơ thẩm hoặc phúc thẩm hoặc tiến hành tố tụng vụ án đó với tư cách là Điều tra viên, Cán bộ điều tra, Kiểm sát viên, Kiểm tra viên, Thẩm tra viên, Thư ký Tòa án. 2. Việc thay đổi Thẩm phán, Hội thẩm trước khi mở phiên tòa do Chánh án hoặc Phó Chánh án Tòa án được phân công giải quyết vụ án quyết định. Thẩm phán bị thay đổi là Chánh án Tòa án thì do Chánh án Tòa án trên một cấp quyết định. Việc thay đổi Thẩm phán, Hội thẩm tại phiên tòa do Hội đồng xét xử quyết định trước khi bắt đầu xét hỏi bằng cách biểu quyết tại phòng nghị án. Khi xem xét thay đổi thành viên nào thì thành viên đó được trình bày ý kiến của mình, Hội đồng quyết định theo đa số. Trường hợp phải thay đổi Thẩm phán, Hội thẩm tại phiên tòa thì Hội đồng xét xử ra quyết định hoãn phiên tòa.' - '“Điều 21. Chấm dứt hưởng trợ cấp thất nghiệp 1. Các trường hợp người lao động đang hưởng trợ cấp thất nghiệp bị chấm dứt hưởng trợ cấp thất nghiệp được quy định như sau: e) Trong thời gian hưởng trợ cấp thất nghiệp, 03 tháng liên tục không thực hiện thông báo hằng tháng về việc tìm kiếm việc làm với trung tâm dịch vụ việc làm theo quy định Ngày mà người lao động được xác định bị chấm dứt hưởng trợ cấp thất nghiệp là ngày kết thúc của thời hạn thông báo tìm kiếm việc làm của tháng thứ 3 liên tục mà người lao động không thực hiện thông báo hằng tháng về việc tìm kiếm việc làm."' - 'Vi phạm quy định về thời hạn làm thủ tục hải quan, nộp hồ sơ thuế ... 2. Phạt tiền từ 1.000.000 đồng đến 2.000.000 đồng đối với hành vi không thực hiện đúng thời hạn quy định thuộc một trong các trường hợp sau: a) Cung cấp báo cáo kiểm toán, báo cáo tài chính của doanh nghiệp được áp dụng chế độ ưu tiên; b) Thông báo cho cơ quan hải quan quyết định xử lý vi phạm pháp luật về quản lý thuế, kế toán đối với doanh nghiệp được áp dụng chế độ ưu tiên; c) Báo cáo về lượng hàng hóa nhập khẩu phục vụ xây dựng nhà xưởng, hàng hóa gửi kho bên ngoài của doanh nghiệp chế xuất; d) Báo cáo về lượng hàng hóa trung chuyển đưa vào, đưa ra, còn lưu tại cảng; đ) Báo cáo thống kê thông quan hàng bưu chính đưa vào Việt Nam để chuyển tiếp đi quốc tế. ...' - source_sentence: Tài chính của Hội Kiểm toán viên hành nghề Việt Nam được chi cho những khoản nào? sentences: - 'Giải thể và xử lý tài chính khi giải thể 1. Khi xét thấy hoạt động của Hội không có hiệu quả, không mang lại lợi ích cho Hội viên hoặc gây phiền hà, cản trở cho Hội viên thì BCH Hội quyết định triệu tập Đại hội để bàn biện pháp củng cố tổ chức hoặc giải thể Hội. Nếu giải thể Hội thì do Đại hội đại biểu hoặc Đại hội toàn quốc của Hội thông qua và đề nghị cơ quan Nhà nước có thẩm quyền xem xét, quyết định. 2. Khi Hội bị giải thể, Ban Thường trực và Ban Kiểm tra của Hội phải tiến hành kiểm kê tài sản, kiểm quỹ và báo cáo BCH Hội quyết định việc xử lý tài sản, tiền tồn quỹ và tiến hành thủ tục giải thể theo quy định của pháp luật.' - '"Điều 14. Miễn trừ đối với thỏa thuận hạn chế cạnh tranh bị cấm 1. 
Thỏa thuận hạn chế cạnh tranh quy định tại các khoản 1, 2, 3, 7, 8, 9, 10 và 11 Điều 11 bị cấm theo quy định tại Điều 12 của Luật này được miễn trừ có thời hạn nếu có lợi cho người tiêu dùng và đáp ứng một trong các điều kiện sau đây: a) Tác động thúc đẩy tiến bộ kỹ thuật, công nghệ, nâng cao chất lượng hàng hóa, dịch vụ; b) Tăng cường sức cạnh tranh của doanh nghiệp Việt Nam trên thị trường quốc tế; c) Thúc đẩy việc áp dụng thống nhất tiêu chuẩn chất lượng, định mức kỹ thuật của chủng loại sản phẩm; d) Thống nhất các điều kiện thực hiện hợp đồng, giao hàng, thanh toán nhưng không liên quan đến giá và các yếu tố của giá. 2. Thỏa thuận lao động, thỏa thuận hợp tác trong các ngành, lĩnh vực đặc thù được thực hiện theo quy định của luật khác thì thực hiện theo quy định của luật đó".' - '"Điều 2. Sửa đổi, bổ sung một số điều của Nghị định số 15/2019/NĐ-CP ngày 01 tháng 02 năm 2019 của Chính phủ quy định chi tiết một số điều và biện pháp thi hành Luật Giáo dục nghề nghiệp ... 12. Sửa đổi, bổ sung Điều 24 như sau: Điều 24. Thẩm quyền cấp giấy chứng nhận đăng ký hoạt động liên kết đào tạo với nước ngoài 1. Tổng cục Giáo dục nghề nghiệp cấp giấy chứng nhận đăng ký hoạt động liên kết đào tạo với nước ngoài đối với trường cao đẳng. 2. Sở Lao động - Thương binh và Xã hội nơi trường trung cấp, trung tâm giáo dục nghề nghiệp, trung tâm giáo dục nghề nghiệp - giáo dục thường xuyên và doanh nghiệp tổ chức hoạt động liên kết đào tạo với nước ngoài cấp giấy chứng nhận đăng ký hoạt động liên kết đào tạo với nước ngoài đối với trường trung cấp, trung tâm giáo dục nghề nghiệp, trung tâm giáo dục nghề nghiệp - giáo dục thường xuyên và doanh nghiệp."' - source_sentence: NLĐ ký nhiều hợp đồng lao động thì đóng BHYT như thế nào? sentences: - 'Hồ sơ, thủ tục xác định trường hợp được bồi thường [...] 3. Trong thời hạn 05 ngày làm việc, kể từ ngày nhận được đơn và các giấy tờ hợp lệ, nếu xác định yêu cầu thuộc trách nhiệm giải quyết của mình thì Sở Y tế phải thụ lý và thông báo bằng văn bản về việc thụ lý đơn cho người bị thiệt hại hoặc thân nhân của người bị thiệt hại (sau đây gọi tắt là người bị thiệt hại). Trường hợp hồ sơ không đầy đủ thì Sở Y tế có văn bản hướng dẫn người bị thiệt hại bổ sung. 4. Trong thời hạn 15 ngày, kể từ ngày nhận được đơn yêu cầu của người bị thiệt hại, Sở Y tế phải hoàn thành việc xác định nguyên nhân gây tai biến, mức độ tổn thương và thông báo bằng văn bản cho người yêu cầu đồng thời báo cáo Bộ Y tế.' - 'Chuyển nhượng quyền thăm dò khoáng sản 1. Tổ chức, cá nhân nhận chuyển nhượng quyền thăm dò khoáng sản phải có đủ điều kiện để được cấp Giấy phép thăm dò khoáng sản theo quy định của Luật này. 2. Việc chuyển nhượng quyền thăm dò khoáng sản phải được cơ quan quản lý nhà nước có thẩm quyền cấp Giấy phép thăm dò khoáng sản chấp thuận; trường hợp được chấp thuận, tổ chức, cá nhân nhận chuyển nhượng quyền thăm dò khoáng sản được cấp Giấy phép thăm dò khoáng sản mới. 3. Tổ chức, cá nhân chuyển nhượng quyền thăm dò khoáng sản đã thực hiện được ít nhất 50% dự toán của đề án thăm dò khoáng sản. 4. Chính phủ quy định chi tiết việc chuyển nhượng quyền thăm dò khoáng sản.' - '"Sửa đổi, bổ sung một số điều của Luật bảo hiểm y tế: ... 6. Sửa đổi, bổ sung Điều 12 như sau: “Điều 12. Đối tượng tham gia bảo hiểm y tế 1. 
Nhóm do người lao động và người sử dụng lao động đóng, bao gồm: a) Người lao động làm việc theo hợp đồng lao động không xác định thời hạn, hợp đồng lao động có thời hạn từ đủ 3 tháng trở lên; người lao động là người quản lý doanh nghiệp hưởng tiền lương; cán bộ, công chức, viên chức (sau đây gọi chung là người lao động); b) Người hoạt động không chuyên trách ở xã, phường, thị trấn theo quy định của pháp luật.= ... 4. Nhóm được ngân sách nhà nước hỗ trợ mức đóng, bao gồm: a) Người thuộc hộ gia đình cận nghèo; b) Học sinh, sinh viên. 5. Nhóm tham gia bảo hiểm y tế theo hộ gia đình gồm những người thuộc hộ gia đình, trừ đối tượng quy định tại các khoản 1, 2, 3 và 4 Điều này. 6. Chính phủ quy định các đối tượng khác ngoài các đối tượng quy định tại các khoản 3, 4 và 5 Điều này; quy định việc cấp thẻ bảo hiểm y tế đối với đối tượng do Bộ Quốc phòng, Bộ Công an quản lý và đối tượng quy định tại điểm 1 khoản 3 Điều này; quy định lộ trình thực hiện bảo hiểm y tế, phạm vi quyền lợi, mức hưởng bảo hiểm y tế, khám bệnh, chữa bệnh bảo hiểm y tế, quản lý, sử dụng phần kinh phí dành cho khám bệnh, chữa bệnh bảo hiểm y tế, giám định bảo hiểm y tế, thanh toán, quyết toán bảo hiểm y tế đối với các đối tượng quy định tại điểm a khoản 3 Điều này.”' --- # SentenceTransformer based on VoVanPhuc/sup-SimCSE-VietNamese-phobert-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [VoVanPhuc/sup-SimCSE-VietNamese-phobert-base](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [VoVanPhuc/sup-SimCSE-VietNamese-phobert-base](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) <!-- at revision 608779b86741a8acd8c8d38132974ff04086b138 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - csv <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Merikatorihuhu/SimCSE-finetuned-vietnamese-legal-documents") # Run inference sentences = [ 'NLĐ ký nhiều hợp đồng lao động thì đóng BHYT như thế nào?', '"Sửa đổi, bổ sung một số điều của Luật bảo hiểm y tế:\n...\n6. 
Sửa đổi, bổ sung Điều 12 như sau:\n“Điều 12. Đối tượng tham gia bảo hiểm y tế\n1. Nhóm do người lao động và người sử dụng lao động đóng, bao gồm:\na) Người lao động làm việc theo hợp đồng lao động không xác định thời hạn, hợp đồng lao động có thời hạn từ đủ 3 tháng trở lên; người lao động là người quản lý doanh nghiệp hưởng tiền lương; cán bộ, công chức, viên chức (sau đây gọi chung là người lao động);\nb) Người hoạt động không chuyên trách ở xã, phường, thị trấn theo quy định của pháp luật.=\n...\n4. Nhóm được ngân sách nhà nước hỗ trợ mức đóng, bao gồm:\na) Người thuộc hộ gia đình cận nghèo;\nb) Học sinh, sinh viên.\n5. Nhóm tham gia bảo hiểm y tế theo hộ gia đình gồm những người thuộc hộ gia đình, trừ đối tượng quy định tại các khoản 1, 2, 3 và 4 Điều này.\n6. Chính phủ quy định các đối tượng khác ngoài các đối tượng quy định tại các khoản 3, 4 và 5 Điều này; quy định việc cấp thẻ bảo hiểm y tế đối với đối tượng do Bộ Quốc phòng, Bộ Công an quản lý và đối tượng quy định tại điểm 1 khoản 3 Điều này; quy định lộ trình thực hiện bảo hiểm y tế, phạm vi quyền lợi, mức hưởng bảo hiểm y tế, khám bệnh, chữa bệnh bảo hiểm y tế, quản lý, sử dụng phần kinh phí dành cho khám bệnh, chữa bệnh bảo hiểm y tế, giám định bảo hiểm y tế, thanh toán, quyết toán bảo hiểm y tế đối với các đối tượng quy định tại điểm a khoản 3 Điều này.”', 'Hồ sơ, thủ tục xác định trường hợp được bồi thường\n[...]\n3. Trong thời hạn 05 ngày làm việc, kể từ ngày nhận được đơn và các giấy tờ hợp lệ, nếu xác định yêu cầu thuộc trách nhiệm giải quyết của mình thì Sở Y tế phải thụ lý và thông báo bằng văn bản về việc thụ lý đơn cho người bị thiệt hại hoặc thân nhân của người bị thiệt hại (sau đây gọi tắt là người bị thiệt hại). Trường hợp hồ sơ không đầy đủ thì Sở Y tế có văn bản hướng dẫn người bị thiệt hại bổ sung.\n4. Trong thời hạn 15 ngày, kể từ ngày nhận được đơn yêu cầu của người bị thiệt hại, Sở Y tế phải hoàn thành việc xác định nguyên nhân gây tai biến, mức độ tổn thương và thông báo bằng văn bản cho người yêu cầu đồng thời báo cáo Bộ Y tế.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### csv * Dataset: csv * Size: 120,210 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 25.08 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 206.98 tokens</li><li>max: 256 tokens</li></ul> | * Samples: | anchor | positive | |:--------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật<br>Trong phạm vi điều chỉnh của văn bản quy phạm pháp luật:<br>1. Xác định nội dung liên quan đến vấn đề bình đẳng giới hoặc vấn đề bất bình đẳng giới, phân biệt đối xử về giới.<br>2. Quy định các biện pháp cần thiết để thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới; dự báo tác động của các quy định đó đối với nam và nữ sau khi được ban hành.<br>3. Xác định nguồn nhân lực, tài chính cần thiết để triển khai các biện pháp thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới.</code> | | <code>Điều kiện để giáo viên trong cơ sở giáo dục mầm non, tiểu học ngoài công lập bị ảnh hưởng bởi Covid-19 được hưởng chính sách hỗ trợ là gì?</code> | <code>Điều kiện được hưởng<br>Cán bộ quản lý, giáo viên, nhân viên được hưởng chính sách khi bảo đảm các điều kiện sau:<br>1. 
Là người đang làm việc tại cơ sở giáo dục ngoài công lập trước khi cơ sở phải tạm dừng hoạt động theo yêu cầu của cơ quan nhà nước có thẩm quyền để phòng, chống dịch COVID-19 tính từ ngày 01 tháng 5 năm 2021 đến hết ngày 31 tháng 12 năm 2021.<br>2. Nghỉ việc không hưởng lương từ 01 tháng trở lên tính từ ngày 01 tháng 5 năm 2021 đến hết ngày 31 tháng 12 năm 2021.<br>3. Chưa được hưởng chính sách hỗ trợ đối với người lao động tạm hoãn hợp đồng lao động, nghỉ việc không hưởng lương theo quy định tại khoản 4, khoản 5, khoản 6 Mục II Nghị quyết số 68/NQ-CP ngày 01 tháng 7 năm 2021 của Chính phủ về một số chính sách hỗ trợ người lao động và người sử dụng lao động gặp khó khăn do đại dịch COVID-19, Nghị quyết số 126/NQ-CP ngày 08 tháng 10 năm 2021 của Chính phủ sửa đổi, bổ sung Nghị quyết số 68/NQ-CP ngày 01 tháng 7 năm 2021 của Chính phủ về một số chính sách hỗ trợ người lao động và người sử dụng lao động gặp khó khăn do đại dịch COVID-19 (sau đây gọi tắt là Nghị quyết số 68/NQ-CP) do không tham gia Bảo hiểm xã hội bắt buộc.<br>4. Có xác nhận làm việc tại cơ sở giáo dục ngoài công lập ít nhất hết năm học 2021 - 2022 theo kế hoạch năm học của địa phương, bao gồm cơ sở giáo dục ngoài công lập đã làm việc trước đây hoặc cơ sở giáo dục ngoài công lập khác trong trường hợp cơ sở giáo dục ngoài công lập trước đây làm việc không hoạt động trở lại.</code> | | <code>Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào?</code> | <code>Nguyên tắc áp dụng<br>1. Trường hợp công chức, viên chức chuyên môn y tế thuộc đối tượng được hưởng các mức phụ cấp ưu đãi theo nghề khác nhau thì được hưởng một mức phụ cấp ưu đãi theo nghề cao nhất.<br>2. Công chức, viên chức đã hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch số 06/2010/TTLT-BYT-BNV-BTC ngày 22/3/2010 của Bộ Y tế, Bộ Nội vụ, Bộ Tài chính hướng dẫn thực hiện Nghị định số 64/2009/NĐ-CP ngày 30/7/2009 của Chính phủ về chính sách đối với cán bộ, viên chức y tế công tác ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn thì không hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch này.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### train * Dataset: train * Size: 13,357 evaluation samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 24.61 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 202.71 tokens</li><li>max: 256 tokens</li></ul> | * Samples: | anchor | positive | 
|:-------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Toà án cấp nào có thẩm quyền giải quyết việc đòi tài sản đã cho người khác vay theo hợp đồng cho vay?</code> | <code>"Điều 35. Thẩm quyền của Tòa án nhân dân cấp huyện<br>1. Tòa án nhân dân cấp huyện có thẩm quyền giải quyết theo thủ tục sơ thẩm những tranh chấp sau đây:<br>a) Tranh chấp về dân sự, hôn nhân và gia đình quy định tại Điều 26 và Điều 28 của Bộ luật này, trừ tranh chấp quy định tại khoản 7 Điều 26 của Bộ luật này;<br>b) Tranh chấp về kinh doanh, thương mại quy định tại khoản 1 Điều 30 của Bộ luật này;<br>c) Tranh chấp về lao động quy định tại Điều 32 của Bộ luật này.<br>2. Tòa án nhân dân cấp huyện có thẩm quyền giải quyết những yêu cầu sau đây:<br>a) Yêu cầu về dân sự quy định tại các khoản 1, 2, 3, 4, 6, 7, 8, 9 và 10 Điều 27 của Bộ luật này;<br>b) Yêu cầu về hôn nhân và gia đình quy định tại các khoản 1, 2, 3, 4, 5, 6, 7, 8, 10 và 11 Điều 29 của Bộ luật này;<br>c) Yêu cầu về kinh doanh, thương mại quy định tại khoản 1 và khoản 6 Điều 31 của Bộ luật này;<br>d) Yêu cầu về lao động quy định tại khoản 1 và khoản 5 Điều 33 của Bộ luật này.<br>3. Những tranh chấp, yêu cầu quy định tại khoản 1 và khoản 2 Điều này mà có đương sự hoặc tài sản ở nước ngoài hoặc cần phải ủy thác tư pháp cho cơ quan đại diện nước Cộng hòa xã hội chủ nghĩa Việt Nam ở nước ngoài, cho Tòa án, cơ quan có thẩm quyền của nước ngoài không thuộc thẩm quyền giải quyết của Tòa án nhân dân cấp huyện, trừ trường hợp quy định tại khoản 4 Điều này.<br>4. 
Tòa án nhân dân cấp huyện nơi cư trú của công dân Việt Nam hủy việc kết hôn trái pháp luật, giải quyết việc ly hôn, các tranh chấp về quyền và nghĩa vụ của vợ chồng, cha mẹ và con, về nhận cha, mẹ, con, nuôi con nuôi và giám hộ giữa công dân Việt Nam cư trú ở khu vực biên giới với công dân của nước láng giềng cùng cư trú ở khu vực biên giới với Việt Nam theo quy định của Bộ luật này và các quy định khác của pháp luật Việt Nam."</code> | | <code>Những phiếu bầu nào được xem là không hợp lệ?</code> | <code>Phiếu bầu không hợp lệ<br>1. Những phiếu bầu sau đây là phiếu bầu không hợp lệ:<br>a) Phiếu không theo mẫu quy định do Tổ bầu cử phát ra;<br>b) Phiếu không có dấu của Tổ bầu cử;<br>c) Phiếu để số người được bầu nhiều hơn số lượng đại biểu được bầu đã ấn định cho đơn vị bầu cử;<br>d) Phiếu gạch xóa hết tên những người ứng cử;<br>đ) Phiếu ghi thêm tên người ngoài danh sách những người ứng cử hoặc phiếu có ghi thêm nội dung khác.<br>2. Trường hợp có phiếu bầu được cho là không hợp lệ thì Tổ trường Tổ bầu cử đưa ra để toàn Tổ xem xét, quyết định. Tổ bầu cử không được gạch xóa hoặc sửa các tên ghi trên phiếu bầu.</code> | | <code>Đề nghị tạm đình chỉ chấp hành quyết định áp dụng biện pháp đưa vào trường giáo dưỡng cho học sinh cần đảm bảo nguyên tắc gì?</code> | <code>Nguyên tắc xét duyệt, đề nghị giảm thời hạn, tạm đình chỉ chấp hành quyết định, miễn chấp hành phần thời gian còn lại cho học sinh trường giáo dưỡng, trại viên cơ sở giáo dục bắt buộc<br>1. Tuân thủ quy định của pháp luật về thi hành biện pháp xử lý hành chính đưa vào trường giáo dưỡng, cơ sở giáo dục bắt buộc, quy định tại Thông tư này và quy định của pháp luật có liên quan.<br>2. Bảo đảm khách quan, công khai, minh bạch, đúng trình tự, thủ tục, thẩm quyền; tôn trọng và bảo vệ quyền, lợi ích hợp pháp của học sinh trường giáo dưỡng, trại viên cơ sở giáo dục bắt buộc.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 4 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: 
False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | train loss | |:------:|:-----:|:-------------:|:----------:| | 0.0665 | 500 | 0.2809 | 0.2215 | | 0.1331 | 1000 | 0.1307 | 0.1547 | | 0.1996 | 1500 | 0.0978 | 0.1366 | | 0.2662 | 2000 | 0.1054 | 0.1221 | | 0.3327 | 2500 | 0.0824 | 0.1215 | | 0.3993 | 3000 | 0.0776 | 0.1223 | | 0.4658 | 3500 | 0.0797 | 0.1161 | | 0.5323 | 4000 | 0.0774 | 0.1070 | | 0.5989 | 4500 | 0.0661 | 0.1007 | | 0.6654 | 5000 | 0.059 | 0.0945 | | 0.7320 | 5500 | 0.0674 | 0.0889 | | 0.7985 | 6000 | 0.0495 | 0.0783 | | 0.8651 | 6500 | 0.0587 | 0.0822 | | 0.9316 | 7000 | 0.0585 | 0.0868 | | 0.9981 | 7500 | 0.0482 | 0.0733 | | 1.0647 | 8000 | 0.0459 | 0.0786 | | 1.1312 | 8500 | 0.0487 | 0.0691 | | 1.1978 | 9000 | 0.0335 | 0.0719 | | 1.2643 | 9500 | 0.0365 | 0.0711 | | 1.3308 | 10000 | 0.0279 | 0.0668 | | 1.3974 | 10500 | 0.0235 | 0.0675 | | 1.4639 | 11000 | 0.0206 | 0.0599 | | 1.5305 | 11500 | 0.0175 | 0.0653 | | 1.5970 | 12000 | 0.0144 | 0.0664 | | 1.6636 | 12500 | 0.0167 | 0.0598 | | 1.7301 | 13000 | 0.0173 | 0.0583 | | 1.7966 | 13500 | 0.0127 | 0.0540 | | 1.8632 | 14000 | 0.0164 | 0.0595 | | 1.9297 | 14500 | 0.014 | 0.0552 | | 1.9963 | 15000 | 0.0114 | 0.0535 | | 2.0628 | 15500 | 0.0097 | 0.0552 | | 2.1294 | 16000 | 0.0111 | 0.0549 | | 2.1959 | 16500 | 0.0076 | 0.0544 | | 2.2624 | 17000 
| 0.009 | 0.0589 | | 2.3290 | 17500 | 0.0084 | 0.0543 | | 2.3955 | 18000 | 0.0049 | 0.0520 | | 2.4621 | 18500 | 0.0068 | 0.0505 | | 2.5286 | 19000 | 0.0037 | 0.0489 | | 2.5952 | 19500 | 0.0031 | 0.0461 | | 2.6617 | 20000 | 0.0041 | 0.0496 | | 2.7282 | 20500 | 0.0051 | 0.0464 | | 2.7948 | 21000 | 0.0029 | 0.0475 | | 2.8613 | 21500 | 0.0032 | 0.0458 | | 2.9279 | 22000 | 0.003 | 0.0449 | | 2.9944 | 22500 | 0.0035 | 0.0458 | | 3.0610 | 23000 | 0.0033 | 0.0443 | | 3.1275 | 23500 | 0.0032 | 0.0416 | | 3.1940 | 24000 | 0.002 | 0.0449 | | 3.2606 | 24500 | 0.0022 | 0.0447 | | 3.3271 | 25000 | 0.0017 | 0.0430 | | 3.3937 | 25500 | 0.002 | 0.0418 | | 3.4602 | 26000 | 0.0019 | 0.0415 | | 3.5268 | 26500 | 0.0008 | 0.0406 | | 3.5933 | 27000 | 0.0007 | 0.0414 | | 3.6598 | 27500 | 0.0008 | 0.0416 | | 3.7264 | 28000 | 0.0011 | 0.0418 | | 3.7929 | 28500 | 0.0006 | 0.0416 | | 3.8595 | 29000 | 0.0005 | 0.0417 | | 3.9260 | 29500 | 0.0007 | 0.0413 | | 3.9925 | 30000 | 0.0008 | 0.0412 | ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.2.1 - Transformers: 4.45.1 - PyTorch: 2.4.0 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
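The card above states that the embeddings can be used for semantic search in addition to pairwise similarity. As a complement to the encode/similarity snippet in the Usage section, here is a minimal, hypothetical retrieval sketch built on `sentence_transformers.util.semantic_search`; the corpus passages and the query are illustrative placeholders and are not taken from the card or its training data.

```python
from sentence_transformers import SentenceTransformer, util

# Load the fine-tuned checkpoint referenced in the Usage section.
model = SentenceTransformer("Merikatorihuhu/SimCSE-finetuned-vietnamese-legal-documents")

# Illustrative corpus of legal passages (placeholders for a real document store).
corpus = [
    "Phiếu bầu không hợp lệ là phiếu không theo mẫu quy định do Tổ bầu cử phát ra.",
    "Người lao động làm việc theo hợp đồng lao động từ đủ 3 tháng trở lên phải tham gia bảo hiểm y tế.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Encode a query and retrieve the closest passages by cosine similarity.
query = "Những phiếu bầu nào được xem là không hợp lệ?"
query_embedding = model.encode(query, convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}", corpus[hit["corpus_id"]])
```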
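For readers who want to approximate the training recipe summarized above (MultipleNegativesRankingLoss with scale 20 and cosine similarity, batch size 16, 4 epochs, 10% warmup, fp16, `no_duplicates` batch sampler), the following is a minimal sketch using the Sentence Transformers 3.x trainer API. The CSV paths, the `anchor`/`positive` column names, and the output directory are assumptions for illustration; the original training script is not published in the card.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

# Assumed CSV files with "anchor" and "positive" columns (placeholder paths).
dataset = load_dataset("csv", data_files={"train": "train.csv", "eval": "eval.csv"})

# Start from the same base model as in the card.
model = SentenceTransformer("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")

# In-batch-negatives loss with the parameters reported above (scale=20, cosine similarity is the default).
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="simcse-vi-legal",          # placeholder output directory
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    eval_steps=500,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate anchors within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["eval"],
    loss=loss,
)
trainer.train()
```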
[ "CHIA" ]
Non_BioNLP
# SentenceTransformer based on VoVanPhuc/sup-SimCSE-VietNamese-phobert-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [VoVanPhuc/sup-SimCSE-VietNamese-phobert-base](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [VoVanPhuc/sup-SimCSE-VietNamese-phobert-base](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) <!-- at revision 608779b86741a8acd8c8d38132974ff04086b138 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - csv <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Merikatorihuhu/SimCSE-finetuned-vietnamese-legal-documents") # Run inference sentences = [ 'NLĐ ký nhiều hợp đồng lao động thì đóng BHYT như thế nào?', '"Sửa đổi, bổ sung một số điều của Luật bảo hiểm y tế:\n...\n6. Sửa đổi, bổ sung Điều 12 như sau:\n“Điều 12. Đối tượng tham gia bảo hiểm y tế\n1. Nhóm do người lao động và người sử dụng lao động đóng, bao gồm:\na) Người lao động làm việc theo hợp đồng lao động không xác định thời hạn, hợp đồng lao động có thời hạn từ đủ 3 tháng trở lên; người lao động là người quản lý doanh nghiệp hưởng tiền lương; cán bộ, công chức, viên chức (sau đây gọi chung là người lao động);\nb) Người hoạt động không chuyên trách ở xã, phường, thị trấn theo quy định của pháp luật.=\n...\n4. Nhóm được ngân sách nhà nước hỗ trợ mức đóng, bao gồm:\na) Người thuộc hộ gia đình cận nghèo;\nb) Học sinh, sinh viên.\n5. Nhóm tham gia bảo hiểm y tế theo hộ gia đình gồm những người thuộc hộ gia đình, trừ đối tượng quy định tại các khoản 1, 2, 3 và 4 Điều này.\n6. 
Chính phủ quy định các đối tượng khác ngoài các đối tượng quy định tại các khoản 3, 4 và 5 Điều này; quy định việc cấp thẻ bảo hiểm y tế đối với đối tượng do Bộ Quốc phòng, Bộ Công an quản lý và đối tượng quy định tại điểm 1 khoản 3 Điều này; quy định lộ trình thực hiện bảo hiểm y tế, phạm vi quyền lợi, mức hưởng bảo hiểm y tế, khám bệnh, chữa bệnh bảo hiểm y tế, quản lý, sử dụng phần kinh phí dành cho khám bệnh, chữa bệnh bảo hiểm y tế, giám định bảo hiểm y tế, thanh toán, quyết toán bảo hiểm y tế đối với các đối tượng quy định tại điểm a khoản 3 Điều này.”', 'Hồ sơ, thủ tục xác định trường hợp được bồi thường\n[...]\n3. Trong thời hạn 05 ngày làm việc, kể từ ngày nhận được đơn và các giấy tờ hợp lệ, nếu xác định yêu cầu thuộc trách nhiệm giải quyết của mình thì Sở Y tế phải thụ lý và thông báo bằng văn bản về việc thụ lý đơn cho người bị thiệt hại hoặc thân nhân của người bị thiệt hại (sau đây gọi tắt là người bị thiệt hại). Trường hợp hồ sơ không đầy đủ thì Sở Y tế có văn bản hướng dẫn người bị thiệt hại bổ sung.\n4. Trong thời hạn 15 ngày, kể từ ngày nhận được đơn yêu cầu của người bị thiệt hại, Sở Y tế phải hoàn thành việc xác định nguyên nhân gây tai biến, mức độ tổn thương và thông báo bằng văn bản cho người yêu cầu đồng thời báo cáo Bộ Y tế.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### csv * Dataset: csv * Size: 120,210 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 25.08 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 206.98 tokens</li><li>max: 256 tokens</li></ul> | * Samples: | anchor | positive | |:--------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật<br>Trong phạm vi điều chỉnh của văn bản quy phạm pháp luật:<br>1. Xác định nội dung liên quan đến vấn đề bình đẳng giới hoặc vấn đề bất bình đẳng giới, phân biệt đối xử về giới.<br>2. Quy định các biện pháp cần thiết để thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới; dự báo tác động của các quy định đó đối với nam và nữ sau khi được ban hành.<br>3. Xác định nguồn nhân lực, tài chính cần thiết để triển khai các biện pháp thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới.</code> | | <code>Điều kiện để giáo viên trong cơ sở giáo dục mầm non, tiểu học ngoài công lập bị ảnh hưởng bởi Covid-19 được hưởng chính sách hỗ trợ là gì?</code> | <code>Điều kiện được hưởng<br>Cán bộ quản lý, giáo viên, nhân viên được hưởng chính sách khi bảo đảm các điều kiện sau:<br>1. 
Là người đang làm việc tại cơ sở giáo dục ngoài công lập trước khi cơ sở phải tạm dừng hoạt động theo yêu cầu của cơ quan nhà nước có thẩm quyền để phòng, chống dịch COVID-19 tính từ ngày 01 tháng 5 năm 2021 đến hết ngày 31 tháng 12 năm 2021.<br>2. Nghỉ việc không hưởng lương từ 01 tháng trở lên tính từ ngày 01 tháng 5 năm 2021 đến hết ngày 31 tháng 12 năm 2021.<br>3. Chưa được hưởng chính sách hỗ trợ đối với người lao động tạm hoãn hợp đồng lao động, nghỉ việc không hưởng lương theo quy định tại khoản 4, khoản 5, khoản 6 Mục II Nghị quyết số 68/NQ-CP ngày 01 tháng 7 năm 2021 của Chính phủ về một số chính sách hỗ trợ người lao động và người sử dụng lao động gặp khó khăn do đại dịch COVID-19, Nghị quyết số 126/NQ-CP ngày 08 tháng 10 năm 2021 của Chính phủ sửa đổi, bổ sung Nghị quyết số 68/NQ-CP ngày 01 tháng 7 năm 2021 của Chính phủ về một số chính sách hỗ trợ người lao động và người sử dụng lao động gặp khó khăn do đại dịch COVID-19 (sau đây gọi tắt là Nghị quyết số 68/NQ-CP) do không tham gia Bảo hiểm xã hội bắt buộc.<br>4. Có xác nhận làm việc tại cơ sở giáo dục ngoài công lập ít nhất hết năm học 2021 - 2022 theo kế hoạch năm học của địa phương, bao gồm cơ sở giáo dục ngoài công lập đã làm việc trước đây hoặc cơ sở giáo dục ngoài công lập khác trong trường hợp cơ sở giáo dục ngoài công lập trước đây làm việc không hoạt động trở lại.</code> | | <code>Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào?</code> | <code>Nguyên tắc áp dụng<br>1. Trường hợp công chức, viên chức chuyên môn y tế thuộc đối tượng được hưởng các mức phụ cấp ưu đãi theo nghề khác nhau thì được hưởng một mức phụ cấp ưu đãi theo nghề cao nhất.<br>2. Công chức, viên chức đã hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch số 06/2010/TTLT-BYT-BNV-BTC ngày 22/3/2010 của Bộ Y tế, Bộ Nội vụ, Bộ Tài chính hướng dẫn thực hiện Nghị định số 64/2009/NĐ-CP ngày 30/7/2009 của Chính phủ về chính sách đối với cán bộ, viên chức y tế công tác ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn thì không hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch này.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### train * Dataset: train * Size: 13,357 evaluation samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 24.61 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 202.71 tokens</li><li>max: 256 tokens</li></ul> | * Samples: | anchor | positive | 
|:-------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Toà án cấp nào có thẩm quyền giải quyết việc đòi tài sản đã cho người khác vay theo hợp đồng cho vay?</code> | <code>"Điều 35. Thẩm quyền của Tòa án nhân dân cấp huyện<br>1. Tòa án nhân dân cấp huyện có thẩm quyền giải quyết theo thủ tục sơ thẩm những tranh chấp sau đây:<br>a) Tranh chấp về dân sự, hôn nhân và gia đình quy định tại Điều 26 và Điều 28 của Bộ luật này, trừ tranh chấp quy định tại khoản 7 Điều 26 của Bộ luật này;<br>b) Tranh chấp về kinh doanh, thương mại quy định tại khoản 1 Điều 30 của Bộ luật này;<br>c) Tranh chấp về lao động quy định tại Điều 32 của Bộ luật này.<br>2. Tòa án nhân dân cấp huyện có thẩm quyền giải quyết những yêu cầu sau đây:<br>a) Yêu cầu về dân sự quy định tại các khoản 1, 2, 3, 4, 6, 7, 8, 9 và 10 Điều 27 của Bộ luật này;<br>b) Yêu cầu về hôn nhân và gia đình quy định tại các khoản 1, 2, 3, 4, 5, 6, 7, 8, 10 và 11 Điều 29 của Bộ luật này;<br>c) Yêu cầu về kinh doanh, thương mại quy định tại khoản 1 và khoản 6 Điều 31 của Bộ luật này;<br>d) Yêu cầu về lao động quy định tại khoản 1 và khoản 5 Điều 33 của Bộ luật này.<br>3. Những tranh chấp, yêu cầu quy định tại khoản 1 và khoản 2 Điều này mà có đương sự hoặc tài sản ở nước ngoài hoặc cần phải ủy thác tư pháp cho cơ quan đại diện nước Cộng hòa xã hội chủ nghĩa Việt Nam ở nước ngoài, cho Tòa án, cơ quan có thẩm quyền của nước ngoài không thuộc thẩm quyền giải quyết của Tòa án nhân dân cấp huyện, trừ trường hợp quy định tại khoản 4 Điều này.<br>4. 
Tòa án nhân dân cấp huyện nơi cư trú của công dân Việt Nam hủy việc kết hôn trái pháp luật, giải quyết việc ly hôn, các tranh chấp về quyền và nghĩa vụ của vợ chồng, cha mẹ và con, về nhận cha, mẹ, con, nuôi con nuôi và giám hộ giữa công dân Việt Nam cư trú ở khu vực biên giới với công dân của nước láng giềng cùng cư trú ở khu vực biên giới với Việt Nam theo quy định của Bộ luật này và các quy định khác của pháp luật Việt Nam."</code> | | <code>Những phiếu bầu nào được xem là không hợp lệ?</code> | <code>Phiếu bầu không hợp lệ<br>1. Những phiếu bầu sau đây là phiếu bầu không hợp lệ:<br>a) Phiếu không theo mẫu quy định do Tổ bầu cử phát ra;<br>b) Phiếu không có dấu của Tổ bầu cử;<br>c) Phiếu để số người được bầu nhiều hơn số lượng đại biểu được bầu đã ấn định cho đơn vị bầu cử;<br>d) Phiếu gạch xóa hết tên những người ứng cử;<br>đ) Phiếu ghi thêm tên người ngoài danh sách những người ứng cử hoặc phiếu có ghi thêm nội dung khác.<br>2. Trường hợp có phiếu bầu được cho là không hợp lệ thì Tổ trường Tổ bầu cử đưa ra để toàn Tổ xem xét, quyết định. Tổ bầu cử không được gạch xóa hoặc sửa các tên ghi trên phiếu bầu.</code> | | <code>Đề nghị tạm đình chỉ chấp hành quyết định áp dụng biện pháp đưa vào trường giáo dưỡng cho học sinh cần đảm bảo nguyên tắc gì?</code> | <code>Nguyên tắc xét duyệt, đề nghị giảm thời hạn, tạm đình chỉ chấp hành quyết định, miễn chấp hành phần thời gian còn lại cho học sinh trường giáo dưỡng, trại viên cơ sở giáo dục bắt buộc<br>1. Tuân thủ quy định của pháp luật về thi hành biện pháp xử lý hành chính đưa vào trường giáo dưỡng, cơ sở giáo dục bắt buộc, quy định tại Thông tư này và quy định của pháp luật có liên quan.<br>2. Bảo đảm khách quan, công khai, minh bạch, đúng trình tự, thủ tục, thẩm quyền; tôn trọng và bảo vệ quyền, lợi ích hợp pháp của học sinh trường giáo dưỡng, trại viên cơ sở giáo dục bắt buộc.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 4 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: 
False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | train loss | |:------:|:-----:|:-------------:|:----------:| | 0.0665 | 500 | 0.2809 | 0.2215 | | 0.1331 | 1000 | 0.1307 | 0.1547 | | 0.1996 | 1500 | 0.0978 | 0.1366 | | 0.2662 | 2000 | 0.1054 | 0.1221 | | 0.3327 | 2500 | 0.0824 | 0.1215 | | 0.3993 | 3000 | 0.0776 | 0.1223 | | 0.4658 | 3500 | 0.0797 | 0.1161 | | 0.5323 | 4000 | 0.0774 | 0.1070 | | 0.5989 | 4500 | 0.0661 | 0.1007 | | 0.6654 | 5000 | 0.059 | 0.0945 | | 0.7320 | 5500 | 0.0674 | 0.0889 | | 0.7985 | 6000 | 0.0495 | 0.0783 | | 0.8651 | 6500 | 0.0587 | 0.0822 | | 0.9316 | 7000 | 0.0585 | 0.0868 | | 0.9981 | 7500 | 0.0482 | 0.0733 | | 1.0647 | 8000 | 0.0459 | 0.0786 | | 1.1312 | 8500 | 0.0487 | 0.0691 | | 1.1978 | 9000 | 0.0335 | 0.0719 | | 1.2643 | 9500 | 0.0365 | 0.0711 | | 1.3308 | 10000 | 0.0279 | 0.0668 | | 1.3974 | 10500 | 0.0235 | 0.0675 | | 1.4639 | 11000 | 0.0206 | 0.0599 | | 1.5305 | 11500 | 0.0175 | 0.0653 | | 1.5970 | 12000 | 0.0144 | 0.0664 | | 1.6636 | 12500 | 0.0167 | 0.0598 | | 1.7301 | 13000 | 0.0173 | 0.0583 | | 1.7966 | 13500 | 0.0127 | 0.0540 | | 1.8632 | 14000 | 0.0164 | 0.0595 | | 1.9297 | 14500 | 0.014 | 0.0552 | | 1.9963 | 15000 | 0.0114 | 0.0535 | | 2.0628 | 15500 | 0.0097 | 0.0552 | | 2.1294 | 16000 | 0.0111 | 0.0549 | | 2.1959 | 16500 | 0.0076 | 0.0544 | | 2.2624 | 17000 
| 0.009 | 0.0589 | | 2.3290 | 17500 | 0.0084 | 0.0543 | | 2.3955 | 18000 | 0.0049 | 0.0520 | | 2.4621 | 18500 | 0.0068 | 0.0505 | | 2.5286 | 19000 | 0.0037 | 0.0489 | | 2.5952 | 19500 | 0.0031 | 0.0461 | | 2.6617 | 20000 | 0.0041 | 0.0496 | | 2.7282 | 20500 | 0.0051 | 0.0464 | | 2.7948 | 21000 | 0.0029 | 0.0475 | | 2.8613 | 21500 | 0.0032 | 0.0458 | | 2.9279 | 22000 | 0.003 | 0.0449 | | 2.9944 | 22500 | 0.0035 | 0.0458 | | 3.0610 | 23000 | 0.0033 | 0.0443 | | 3.1275 | 23500 | 0.0032 | 0.0416 | | 3.1940 | 24000 | 0.002 | 0.0449 | | 3.2606 | 24500 | 0.0022 | 0.0447 | | 3.3271 | 25000 | 0.0017 | 0.0430 | | 3.3937 | 25500 | 0.002 | 0.0418 | | 3.4602 | 26000 | 0.0019 | 0.0415 | | 3.5268 | 26500 | 0.0008 | 0.0406 | | 3.5933 | 27000 | 0.0007 | 0.0414 | | 3.6598 | 27500 | 0.0008 | 0.0416 | | 3.7264 | 28000 | 0.0011 | 0.0418 | | 3.7929 | 28500 | 0.0006 | 0.0416 | | 3.8595 | 29000 | 0.0005 | 0.0417 | | 3.9260 | 29500 | 0.0007 | 0.0413 | | 3.9925 | 30000 | 0.0008 | 0.0412 | ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.2.1 - Transformers: 4.45.1 - PyTorch: 2.4.0 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
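The hyperparameters and training logs above describe a MultipleNegativesRankingLoss fine-tuning run. As a rough orientation only, the sketch below shows the general shape of such a run with the Sentence Transformers 3.x trainer; it assumes the base model listed in this card's metadata (`VoVanPhuc/sup-SimCSE-VietNamese-phobert-base`), and the example pair and all trainer defaults are illustrative placeholders rather than the actual training script or data.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Base model taken from this card's metadata; the pair below is a placeholder, not the real training data.
model = SentenceTransformer("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")

train_dataset = Dataset.from_dict({
    "anchor": ["Example legal question about social insurance contributions."],
    "positive": ["Example statute passage that answers the question."],
})

# In-batch-negatives ranking loss, as cited above.
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```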
{"base_model": "VoVanPhuc/sup-SimCSE-VietNamese-phobert-base", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:120210", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Chủ tịch Ủy ban nhân dân xã có quyền ra quyết định cưỡng chế tháo dỡ công trình xây dựng trên đất nông nghiệp khi chưa chuyển mục đích sử dụng đất hay không?", "sentences": ["Đối tượng, điều kiện kéo dài tuổi phục vụ tại ngũ\n1. Đối tượng:\na) Quân nhân chuyên nghiệp có trình độ cao đẳng trở lên đang đảm nhiệm các chức danh: Kỹ thuật viên, Nhân viên Kỹ thuật, Huấn luyện viên, Nghệ sĩ, Nhạc sĩ, Diễn viên làm việc đúng chuyên ngành đào tạo ở các cơ sở nghiên cứu, nhà trường, bệnh viện, trung tâm thể dục thể thao, đoàn nghệ thuật, nhà máy, doanh nghiệp quốc phòng; đơn vị đóng quân ở địa bàn vùng sâu, vùng xa, biên giới, hải đảo.\nb) Quân nhân chuyên nghiệp đang làm việc thuộc các chuyên ngành hẹp được đào tạo công phu hoặc chuyên ngành Quân đội chưa đào tạo được; thợ bậc cao.\nc) Quân nhân chuyên nghiệp đang đảm nhiệm chức vụ chỉ huy, quản lý ở các nhà máy, doanh nghiệp quốc phòng.\nd) Quân nhân chuyên nghiệp không thuộc đối tượng quy định tại điểm a, điểm b, điểm c khoản này do Bộ trưởng Bộ Quốc phòng quyết định.\n2. Điều kiện:\nQuân nhân chuyên nghiệp thuộc đối tượng quy định tại khoản 1 Điều này được kéo dài tuổi phục vụ tại ngũ khi có đủ các điều kiện sau:\na) Đơn vị có biên chế và nhu cầu sử dụng;\nb) Hết hạn tuổi phục vụ tại ngũ cao nhất theo cấp bậc quân hàm quy định tại khoản 2 Điều 17 Luật Quân nhân chuyên nghiệp, công nhân và viên chức quốc phòng; chưa có người thay thế; tự nguyện tiếp tục phục vụ tại ngũ;\nc) Có đủ phẩm chất chính trị, đạo đức, sức khỏe để hoàn thành nhiệm vụ được giao;\nd) Có trình độ chuyên môn kỹ thuật, nghiệp vụ giỏi; tay nghề cao; chất lượng, hiệu quả công tác tốt.", "Thi hành quyết định cưỡng chế\n1. Người ra quyết định cưỡng chế có trách nhiệm gửi ngay quyết định cưỡng chế cho các cá nhân, tổ chức liên quan và tổ chức thực hiện việc cưỡng chế thi hành quyết định xử phạt của mình và của cấp dưới.\n...\"", "Trình tự, thủ tục đăng ký tài khoản định danh điện tử đối với công dân Việt Nam\n1. Đăng ký tài khoản định danh điện tử mức độ 1 qua ứng dụng VNelD đối với công dân đã có thẻ Căn cước công dân gắn chíp điện tử\na) Công dân sử dụng thiết bị di động tải và cài đặt ứng dụng VNelD.\nb) Công dân sử dụng ứng dụng VNelD để nhập thông tin về số định danh cá nhân và số điện thoại hoặc địa chỉ thư điện tử; cung cấp các thông tin theo hướng dẫn trên ứng dụng VNelD; thu nhận ảnh chân dung bằng thiết bị di động và gửi yêu cầu đề nghị cấp tài khoản định danh điện tử tới cơ quan quản lý định danh và xác thực điện tử qua ứng dụng VNelD.\nc) Cơ quan quản lý định danh điện tử thông báo kết quả đăng ký tài khoản qua ứng dụng VNelD hoặc tin nhắn SMS hoặc địa chỉ thư điện tử.\n2. Đăng ký tài khoản định danh điện tử mức độ 2\na) Đối với công dân đã được cấp thẻ Căn cước công dân gắn chíp điện tử:\nCông dân đến Công an xã, phường, thị trấn hoặc nơi làm thủ tục cấp thẻ Căn cước công dân để làm thủ tục cấp tài khoản định danh điện tử. 
Công dân xuất trình thẻ Căn cước công dân gắn chíp điện tử, cung cấp thông tin về số điện thoại hoặc địa chỉ thư điện tử và đề nghị bổ sung thông tin được tích hợp vào tài khoản định danh điện tử.\nCán bộ tiếp nhận nhập thông tin công dân cung cấp vào hệ thống định danh và xác thực điện tử; chụp ảnh chân dung, thu nhận vân tay của công dân đến làm thủ tục để xác thực với Cơ sở dữ liệu căn cước công dân và khẳng định sự đồng ý đăng ký tạo lập tài khoản định danh điện tử.\nCơ quan quản lý định danh điện tử thông báo kết quả đăng ký tài khoản qua ứng dụng VNelD hoặc tin nhắn SMS hoặc địa chỉ thư điện tử.\nb) Cơ quan Công an tiến hành cấp tài khoản định danh điện tử mức độ 2 cùng với cấp thẻ Căn cước công dân với trường hợp công dân chưa được cấp Căn cước công dân gắn chíp điện tử."]}, {"source_sentence": "Mức hưởng chế độ thai sản đối với lao động nam là người nước ngoài được pháp luật quy định như thế nào?", "sentences": ["\"Điều 21. Thông báo kết quả và xác nhận nhập học\n1. Cơ sở đào tạo gửi giấy báo trúng tuyển cho những thí sinh trúng tuyển, trong đó ghi rõ những thủ tục cần thiết đối với thí sinh khi nhập học và phương thức nhập học của thí sinh.\n2. Thí sinh xác nhận nhập học bằng hình thức trực tuyến trên hệ thống, trước khi nhập học tại cơ sở đào tạo.\n3. Đối với những thí sinh không xác nhận nhập học trong thời hạn quy định:\na) Nếu không có lý do chính đáng thì coi như thí sinh từ chối nhập học và cơ sở đào tạo có quyền không tiếp nhận;\nb) Nếu do ốm đau, tai nạn, có giấy xác nhận của bệnh viện quận, huyện trở lên hoặc do thiên tai có xác nhận của UBND quận, huyện trở lên, cơ sở đào tạo xem xét quyết định tiếp nhận thí sinh vào học hoặc bảo lưu kết quả tuyển sinh để thí sinh vào học sau;\nc) Nếu do sai sót, nhầm lẫn của cán bộ thực hiện công tác tuyển sinh hoặc cá nhân thí sinh gây ra, cơ sở đào tạo chủ động phối hợp với các cá nhân, tổ chức liên quan xem xét các minh chứng và quyết định việc tiếp nhận thí sinh vào học hoặc bảo lưu kết quả tuyển sinh để thí sinh vào học sau.\n4. Thí sinh đã xác nhận nhập học tại một cơ sở đào tạo không được tham gia xét tuyển ở nơi khác hoặc ở các đợt xét tuyển bổ sung, trừ trường hợp được cơ sở đào tạo cho phép.\"", "Tổ chức, nhiệm vụ, quyền hạn của Ban Chỉ huy\n...\n2. 
Nhiệm vụ, quyền hạn của Ban Chỉ huy:\na) Chỉ đạo xây dựng, ban hành quy định về công tác bảo đảm an toàn PCCC và CNCH tại Trụ sở cơ quan Bộ Tư pháp.\nb) Hướng dẫn, phối hợp với các đơn vị thuộc Bộ và chỉ đạo Đội PCCC và CNCH cơ sở tổ chức tuyên truyền, bồi dưỡng nghiệp vụ PCCC và CNCH.\nc) Chỉ đạo Đội PCCC và CNCH cơ sở tại Trụ sở cơ quan Bộ Tư pháp xây dựng, trình cấp có thẩm quyền phê duyệt và tổ chức thực tập phương án PCCC, phương án CNCH.\nd) Chỉ đạo Đội PCCC và CNCH cơ sở tại Trụ sở cơ quan Bộ Tư pháp quản lý các trang thiết bị PCCC và CNCH.\nđ) Chỉ đạo chữa cháy, CNCH khi xảy ra cháy, sự cố, tai nạn tại Trụ sở cơ quan Bộ Tư pháp.\ne) Chỉ đạo việc tổ chức lập và lưu giữ hồ sơ quản lý, theo dõi hoạt động PCCC, CNCH tại Trụ sở cơ quan Bộ Tư pháp.\ng) Chỉ đạo việc sơ kết, tổng kết các hoạt động về PCCC và CNCH của cơ quan; kiểm tra, đôn đốc việc chấp hành các quy định về PCCC và CNCH.\nh) Đề xuất việc khen thưởng, kỷ luật các tập thể, cá nhân trong việc thực hiện công tác PCCC, CNCH.\ni) Chỉ đạo Đội PCCC và CNCH cơ sở dự trù kinh phí cho các hoạt động PCCC và CNCH tại Trụ sở cơ quan Bộ Tư pháp.\nk) Thực hiện các nhiệm vụ khác do Bộ trưởng giao và theo quy định của pháp luật.", "Mức hưởng chế độ thai sản\n...\nb) Mức hưởng một ngày đối với trường hợp quy định tại Điều 32 và khoản 2 Điều 34 của Luật này được tính bằng mức hưởng chế độ thai sản theo tháng chia cho 24 ngày."]}, {"source_sentence": "Doanh nghiệp được áp dụng chế độ ưu tiên không cung cấp báo cáo kiểm toán đúng thời hạn bị phạt bao nhiêu tiền?", "sentences": ["Thay đổi Thẩm phán, Hội thẩm\n1. Thẩm phán, Hội thẩm phải từ chối tham gia xét xử hoặc bị thay đổi khi thuộc một trong các trường hợp:\na) Trường hợp quy định tại Điều 49 của Bộ luật này;\nb) Họ cùng trong một Hội đồng xét xử và là người thân thích với nhau;\nc) Đã tham gia xét xử sơ thẩm hoặc phúc thẩm hoặc tiến hành tố tụng vụ án đó với tư cách là Điều tra viên, Cán bộ điều tra, Kiểm sát viên, Kiểm tra viên, Thẩm tra viên, Thư ký Tòa án.\n2. Việc thay đổi Thẩm phán, Hội thẩm trước khi mở phiên tòa do Chánh án hoặc Phó Chánh án Tòa án được phân công giải quyết vụ án quyết định.\nThẩm phán bị thay đổi là Chánh án Tòa án thì do Chánh án Tòa án trên một cấp quyết định.\nViệc thay đổi Thẩm phán, Hội thẩm tại phiên tòa do Hội đồng xét xử quyết định trước khi bắt đầu xét hỏi bằng cách biểu quyết tại phòng nghị án. Khi xem xét thay đổi thành viên nào thì thành viên đó được trình bày ý kiến của mình, Hội đồng quyết định theo đa số.\nTrường hợp phải thay đổi Thẩm phán, Hội thẩm tại phiên tòa thì Hội đồng xét xử ra quyết định hoãn phiên tòa.", "“Điều 21. Chấm dứt hưởng trợ cấp thất nghiệp\n1. Các trường hợp người lao động đang hưởng trợ cấp thất nghiệp bị chấm dứt hưởng trợ cấp thất nghiệp được quy định như sau:\ne) Trong thời gian hưởng trợ cấp thất nghiệp, 03 tháng liên tục không thực hiện thông báo hằng tháng về việc tìm kiếm việc làm với trung tâm dịch vụ việc làm theo quy định\nNgày mà người lao động được xác định bị chấm dứt hưởng trợ cấp thất nghiệp là ngày kết thúc của thời hạn thông báo tìm kiếm việc làm của tháng thứ 3 liên tục mà người lao động không thực hiện thông báo hằng tháng về việc tìm kiếm việc làm.\"", "Vi phạm quy định về thời hạn làm thủ tục hải quan, nộp hồ sơ thuế\n...\n2. 
Phạt tiền từ 1.000.000 đồng đến 2.000.000 đồng đối với hành vi không thực hiện đúng thời hạn quy định thuộc một trong các trường hợp sau:\na) Cung cấp báo cáo kiểm toán, báo cáo tài chính của doanh nghiệp được áp dụng chế độ ưu tiên;\nb) Thông báo cho cơ quan hải quan quyết định xử lý vi phạm pháp luật về quản lý thuế, kế toán đối với doanh nghiệp được áp dụng chế độ ưu tiên;\nc) Báo cáo về lượng hàng hóa nhập khẩu phục vụ xây dựng nhà xưởng, hàng hóa gửi kho bên ngoài của doanh nghiệp chế xuất;\nd) Báo cáo về lượng hàng hóa trung chuyển đưa vào, đưa ra, còn lưu tại cảng;\nđ) Báo cáo thống kê thông quan hàng bưu chính đưa vào Việt Nam để chuyển tiếp đi quốc tế.\n..."]}, {"source_sentence": "Tài chính của Hội Kiểm toán viên hành nghề Việt Nam được chi cho những khoản nào?", "sentences": ["Giải thể và xử lý tài chính khi giải thể\n1. Khi xét thấy hoạt động của Hội không có hiệu quả, không mang lại lợi ích cho Hội viên hoặc gây phiền hà, cản trở cho Hội viên thì BCH Hội quyết định triệu tập Đại hội để bàn biện pháp củng cố tổ chức hoặc giải thể Hội. Nếu giải thể Hội thì do Đại hội đại biểu hoặc Đại hội toàn quốc của Hội thông qua và đề nghị cơ quan Nhà nước có thẩm quyền xem xét, quyết định.\n2. Khi Hội bị giải thể, Ban Thường trực và Ban Kiểm tra của Hội phải tiến hành kiểm kê tài sản, kiểm quỹ và báo cáo BCH Hội quyết định việc xử lý tài sản, tiền tồn quỹ và tiến hành thủ tục giải thể theo quy định của pháp luật.", "\"Điều 14. Miễn trừ đối với thỏa thuận hạn chế cạnh tranh bị cấm\n1. Thỏa thuận hạn chế cạnh tranh quy định tại các khoản 1, 2, 3, 7, 8, 9, 10 và 11 Điều 11 bị cấm theo quy định tại Điều 12 của Luật này được miễn trừ có thời hạn nếu có lợi cho người tiêu dùng và đáp ứng một trong các điều kiện sau đây:\na) Tác động thúc đẩy tiến bộ kỹ thuật, công nghệ, nâng cao chất lượng hàng hóa, dịch vụ;\nb) Tăng cường sức cạnh tranh của doanh nghiệp Việt Nam trên thị trường quốc tế;\nc) Thúc đẩy việc áp dụng thống nhất tiêu chuẩn chất lượng, định mức kỹ thuật của chủng loại sản phẩm;\nd) Thống nhất các điều kiện thực hiện hợp đồng, giao hàng, thanh toán nhưng không liên quan đến giá và các yếu tố của giá.\n2. Thỏa thuận lao động, thỏa thuận hợp tác trong các ngành, lĩnh vực đặc thù được thực hiện theo quy định của luật khác thì thực hiện theo quy định của luật đó\".", "\"Điều 2. Sửa đổi, bổ sung một số điều của Nghị định số 15/2019/NĐ-CP ngày 01 tháng 02 năm 2019 của Chính phủ quy định chi tiết một số điều và biện pháp thi hành Luật Giáo dục nghề nghiệp\n...\n12. Sửa đổi, bổ sung Điều 24 như sau:\nĐiều 24. Thẩm quyền cấp giấy chứng nhận đăng ký hoạt động liên kết đào tạo với nước ngoài\n1. Tổng cục Giáo dục nghề nghiệp cấp giấy chứng nhận đăng ký hoạt động liên kết đào tạo với nước ngoài đối với trường cao đẳng.\n2. Sở Lao động - Thương binh và Xã hội nơi trường trung cấp, trung tâm giáo dục nghề nghiệp, trung tâm giáo dục nghề nghiệp - giáo dục thường xuyên và doanh nghiệp tổ chức hoạt động liên kết đào tạo với nước ngoài cấp giấy chứng nhận đăng ký hoạt động liên kết đào tạo với nước ngoài đối với trường trung cấp, trung tâm giáo dục nghề nghiệp, trung tâm giáo dục nghề nghiệp - giáo dục thường xuyên và doanh nghiệp.\""]}, {"source_sentence": "NLĐ ký nhiều hợp đồng lao động thì đóng BHYT như thế nào?", "sentences": ["Hồ sơ, thủ tục xác định trường hợp được bồi thường\n[...]\n3. 
Trong thời hạn 05 ngày làm việc, kể từ ngày nhận được đơn và các giấy tờ hợp lệ, nếu xác định yêu cầu thuộc trách nhiệm giải quyết của mình thì Sở Y tế phải thụ lý và thông báo bằng văn bản về việc thụ lý đơn cho người bị thiệt hại hoặc thân nhân của người bị thiệt hại (sau đây gọi tắt là người bị thiệt hại). Trường hợp hồ sơ không đầy đủ thì Sở Y tế có văn bản hướng dẫn người bị thiệt hại bổ sung.\n4. Trong thời hạn 15 ngày, kể từ ngày nhận được đơn yêu cầu của người bị thiệt hại, Sở Y tế phải hoàn thành việc xác định nguyên nhân gây tai biến, mức độ tổn thương và thông báo bằng văn bản cho người yêu cầu đồng thời báo cáo Bộ Y tế.", "Chuyển nhượng quyền thăm dò khoáng sản\n1. Tổ chức, cá nhân nhận chuyển nhượng quyền thăm dò khoáng sản phải có đủ điều kiện để được cấp Giấy phép thăm dò khoáng sản theo quy định của Luật này.\n2. Việc chuyển nhượng quyền thăm dò khoáng sản phải được cơ quan quản lý nhà nước có thẩm quyền cấp Giấy phép thăm dò khoáng sản chấp thuận; trường hợp được chấp thuận, tổ chức, cá nhân nhận chuyển nhượng quyền thăm dò khoáng sản được cấp Giấy phép thăm dò khoáng sản mới.\n3. Tổ chức, cá nhân chuyển nhượng quyền thăm dò khoáng sản đã thực hiện được ít nhất 50% dự toán của đề án thăm dò khoáng sản.\n4. Chính phủ quy định chi tiết việc chuyển nhượng quyền thăm dò khoáng sản.", "\"Sửa đổi, bổ sung một số điều của Luật bảo hiểm y tế:\n...\n6. Sửa đổi, bổ sung Điều 12 như sau:\n“Điều 12. Đối tượng tham gia bảo hiểm y tế\n1. Nhóm do người lao động và người sử dụng lao động đóng, bao gồm:\na) Người lao động làm việc theo hợp đồng lao động không xác định thời hạn, hợp đồng lao động có thời hạn từ đủ 3 tháng trở lên; người lao động là người quản lý doanh nghiệp hưởng tiền lương; cán bộ, công chức, viên chức (sau đây gọi chung là người lao động);\nb) Người hoạt động không chuyên trách ở xã, phường, thị trấn theo quy định của pháp luật.=\n...\n4. Nhóm được ngân sách nhà nước hỗ trợ mức đóng, bao gồm:\na) Người thuộc hộ gia đình cận nghèo;\nb) Học sinh, sinh viên.\n5. Nhóm tham gia bảo hiểm y tế theo hộ gia đình gồm những người thuộc hộ gia đình, trừ đối tượng quy định tại các khoản 1, 2, 3 và 4 Điều này.\n6. Chính phủ quy định các đối tượng khác ngoài các đối tượng quy định tại các khoản 3, 4 và 5 Điều này; quy định việc cấp thẻ bảo hiểm y tế đối với đối tượng do Bộ Quốc phòng, Bộ Công an quản lý và đối tượng quy định tại điểm 1 khoản 3 Điều này; quy định lộ trình thực hiện bảo hiểm y tế, phạm vi quyền lợi, mức hưởng bảo hiểm y tế, khám bệnh, chữa bệnh bảo hiểm y tế, quản lý, sử dụng phần kinh phí dành cho khám bệnh, chữa bệnh bảo hiểm y tế, giám định bảo hiểm y tế, thanh toán, quyết toán bảo hiểm y tế đối với các đối tượng quy định tại điểm a khoản 3 Điều này.”"]}]}
dataset
null
585
FreedomIntelligence/Apollo2-0.5B
FreedomIntelligence
question-answering
[ "safetensors", "qwen2", "biology", "medical", "question-answering", "ar", "en", "zh", "ko", "ja", "mn", "th", "vi", "lo", "mg", "de", "pt", "es", "fr", "ru", "it", "hr", "gl", "cs", "co", "la", "uk", "bs", "bg", "eo", "sq", "da", "sa", "gn", "sr", "sk", "gd", "lb", "hi", "ku", "mt", "he", "ln", "bm", "sw", "ig", "rw", "ha", "dataset:FreedomIntelligence/ApolloMoEDataset", "arxiv:2410.10626", "base_model:Qwen/Qwen2-0.5B", "base_model:finetune:Qwen/Qwen2-0.5B", "license:apache-2.0", "region:us" ]
2024-10-14T07:26:15Z
2024-11-20T03:44:01+00:00
76
1
--- base_model: - Qwen/Qwen2-0.5B datasets: - FreedomIntelligence/ApolloMoEDataset language: - ar - en - zh - ko - ja - mn - th - vi - lo - mg - de - pt - es - fr - ru - it - hr - gl - cs - co - la - uk - bs - bg - eo - sq - da - sa - gn - sr - sk - gd - lb - hi - ku - mt - he - ln - bm - sw - ig - rw - ha license: apache-2.0 metrics: - accuracy pipeline_tag: question-answering tags: - biology - medical --- # Democratizing Medical LLMs For Much More Languages Covering 12 Major Languages including English, Chinese, French, Hindi, Spanish, Arabic, Russian, Japanese, Korean, German, Italian, Portuguese and 38 Minor Languages So far. <p align="center"> 📃 <a href="https://arxiv.org/abs/2410.10626" target="_blank">Paper</a> • 🌐 <a href="" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEDataset" target="_blank">ApolloMoEDataset</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEBench" target="_blank">ApolloMoEBench</a> • 🤗 <a href="https://huggingface.co/collections/FreedomIntelligence/apollomoe-and-apollo2-670ddebe3bb1ba1aebabbf2c" target="_blank">Models</a> •🌐 <a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Apollo</a> • 🌐 <a href="https://github.com/FreedomIntelligence/ApolloMoE" target="_blank">ApolloMoE</a> </p> ![Apollo](assets/apollo_medium_final.png) ## 🌈 Update * **[2024.10.15]** ApolloMoE repo is published!🎉 ## Languages Coverage 12 Major Languages and 38 Minor Languages <details> <summary>Click to view the Languages Coverage</summary> ![ApolloMoE](assets/languages.png) </details> ## Architecture <details> <summary>Click to view the MoE routing image</summary> ![ApolloMoE](assets/hybrid_routing.png) </details> ## Results #### Dense 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-0.5B" target="_blank">Apollo2-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-1.5B" target="_blank">Apollo2-1.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-2B" target="_blank">Apollo2-2B</a> 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-3.8B" target="_blank">Apollo2-3.8B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-7B" target="_blank">Apollo2-7B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-9B" target="_blank">Apollo2-9B</a> <details> <summary>Click to view the Dense Models Results</summary> ![ApolloMoE](assets/dense_results.png) </details> #### Post-MoE 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-0.5B" target="_blank">Apollo-MoE-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-1.5B" target="_blank">Apollo-MoE-1.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-7B" target="_blank">Apollo-MoE-7B</a> <details> <summary>Click to view the Post-MoE Models Results</summary> ![ApolloMoE](assets/post_moe_results.png) </details> ## Usage Format ##### Apollo2 - 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|> - 2B, 9B: User:{query}\nAssistant:{response}\<eos\> - 3.8B: <|user|>\n{query}<|end|><|assisitant|>\n{response}<|end|> ##### Apollo-MoE - 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|> ## Dataset & Evaluation - Dataset 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEDataset" target="_blank">ApolloMoEDataset</a> <details><summary>Click to expand</summary> ![ApolloMoE](assets/Dataset.png) - [Data 
category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train) </details> - Evaluation 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEBench" target="_blank">ApolloMoEBench</a> <details><summary>Click to expand</summary> - EN: - [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options) - [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test) - [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper. - [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - ZH: - [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test) - [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper - Randomly sample 2,000 multiple-choice questions with single answer. - [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu) - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology - [CExam](https://github.com/williamliujl/CMExam): Not used in the paper - Randomly sample 2,000 multiple-choice questions - ES: [Head_qa](https://huggingface.co/datasets/head_qa) - FR: - [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA) - [MMLU_FR] - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - AR: [MMLU_AR](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - JA: [IgakuQA](https://github.com/jungokasai/IgakuQA) - KO: [KorMedMCQA](https://huggingface.co/datasets/sean0042/KorMedMCQA) - IT: - [MedExpQA](https://huggingface.co/datasets/HiTZ/MedExpQA) - [MMLU_IT] - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - DE: [BioInstructQA](https://huggingface.co/datasets/BioMistral/BioInstructQA): German part - PT: [BioInstructQA](https://huggingface.co/datasets/BioMistral/BioInstructQA): Portuguese part - RU: [RuMedBench](https://github.com/sb-ai-lab/MedBench) </details> ## Model Download and Inference We take Apollo-MoE-0.5B as an example 1. Login Huggingface ``` huggingface-cli login --token $HUGGINGFACE_TOKEN ``` 2. Download model to local dir ```python from huggingface_hub import snapshot_download import os local_model_dir=os.path.join('/path/to/models/dir','Apollo-MoE-0.5B') snapshot_download(repo_id="FreedomIntelligence/Apollo-MoE-0.5B", local_dir=local_model_dir) ``` 3. 
Inference Example ```python from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig import os local_model_dir=os.path.join('/path/to/models/dir','Apollo-MoE-0.5B') model=AutoModelForCausalLM.from_pretrained(local_model_dir,trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained(local_model_dir,trust_remote_code=True) generation_config = GenerationConfig.from_pretrained(local_model_dir, pad_token_id=tokenizer.pad_token_id, num_return_sequences=1, max_new_tokens=7, min_new_tokens=2, do_sample=False, temperature=1.0, top_k=50, top_p=1.0) inputs = tokenizer('Answer directly.\nThe capital of Mongolia is Ulaanbaatar.\nThe capital of Iceland is Reykjavik.\nThe capital of Australia is', return_tensors='pt') inputs = inputs.to(model.device) pred = model.generate(**inputs,generation_config=generation_config) print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)) ``` ## Results reproduction <details><summary>Click to expand</summary> We take Apollo2-7B or Apollo-MoE-0.5B as an example 1. Download the dataset for the project: ``` bash 0.download_data.sh ``` 2. Prepare test and dev data for the specific model: - Create test data with the special tokens ``` bash 1.data_process_test&dev.sh ``` 3. Prepare training data for the specific model (create tokenized data in advance): - You can adjust the data training order and the number of training epochs in this step ``` bash 2.data_process_train.sh ``` 4. Train the model - If you want to train on multiple nodes, please refer to ./src/sft/training_config/zero_multi.yaml ``` bash 3.single_node_train.sh ``` 5. Evaluate your model: generate scores for the benchmarks ``` bash 4.eval.sh ``` </details> ## Citation Please use the following citation if you intend to use our dataset for training or evaluation: ``` @misc{zheng2024efficientlydemocratizingmedicalllms, title={Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts}, author={Guorui Zheng and Xidong Wang and Juhao Liang and Nuo Chen and Yuping Zheng and Benyou Wang}, year={2024}, eprint={2410.10626}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2410.10626}, } ```
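The Usage Format section above only lists the raw prompt templates. As a complement to the Apollo-MoE inference example, here is a minimal sketch of applying the documented `User:{query}\nAssistant:{response}<|endoftext|>` format of Apollo2-0.5B with a plain `transformers` generate call; the query text and generation settings are illustrative assumptions rather than values from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/Apollo2-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Wrap the query in the documented template; generation continues after "Assistant:".
query = "What are common symptoms of iron-deficiency anemia?"  # illustrative query
prompt = f"User:{query}\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```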
[ "HEAD-QA", "MEDQA", "PUBMEDQA" ]
BioNLP
# Democratizing Medical LLMs For Much More Languages Covering 12 Major Languages including English, Chinese, French, Hindi, Spanish, Arabic, Russian, Japanese, Korean, German, Italian, Portuguese and 38 Minor Languages So far. <p align="center"> 📃 <a href="https://arxiv.org/abs/2410.10626" target="_blank">Paper</a> • 🌐 <a href="" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEDataset" target="_blank">ApolloMoEDataset</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEBench" target="_blank">ApolloMoEBench</a> • 🤗 <a href="https://huggingface.co/collections/FreedomIntelligence/apollomoe-and-apollo2-670ddebe3bb1ba1aebabbf2c" target="_blank">Models</a> •🌐 <a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Apollo</a> • 🌐 <a href="https://github.com/FreedomIntelligence/ApolloMoE" target="_blank">ApolloMoE</a> </p> ![Apollo](assets/apollo_medium_final.png) ## 🌈 Update * **[2024.10.15]** ApolloMoE repo is published!🎉 ## Languages Coverage 12 Major Languages and 38 Minor Languages <details> <summary>Click to view the Languages Coverage</summary> ![ApolloMoE](assets/languages.png) </details> ## Architecture <details> <summary>Click to view the MoE routing image</summary> ![ApolloMoE](assets/hybrid_routing.png) </details> ## Results #### Dense 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-0.5B" target="_blank">Apollo2-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-1.5B" target="_blank">Apollo2-1.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-2B" target="_blank">Apollo2-2B</a> 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-3.8B" target="_blank">Apollo2-3.8B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-7B" target="_blank">Apollo2-7B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-9B" target="_blank">Apollo2-9B</a> <details> <summary>Click to view the Dense Models Results</summary> ![ApolloMoE](assets/dense_results.png) </details> #### Post-MoE 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-0.5B" target="_blank">Apollo-MoE-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-1.5B" target="_blank">Apollo-MoE-1.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-7B" target="_blank">Apollo-MoE-7B</a> <details> <summary>Click to view the Post-MoE Models Results</summary> ![ApolloMoE](assets/post_moe_results.png) </details> ## Usage Format ##### Apollo2 - 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|> - 2B, 9B: User:{query}\nAssistant:{response}\<eos\> - 3.8B: <|user|>\n{query}<|end|><|assisitant|>\n{response}<|end|> ##### Apollo-MoE - 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|> ## Dataset & Evaluation - Dataset 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEDataset" target="_blank">ApolloMoEDataset</a> <details><summary>Click to expand</summary> ![ApolloMoE](assets/Dataset.png) - [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train) </details> - Evaluation 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEBench" target="_blank">ApolloMoEBench</a> <details><summary>Click to expand</summary> - EN: - [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options) - [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test) - [PubMedQA](https://huggingface.co/datasets/pubmed_qa): 
Because the results fluctuated too much, they were not used in the paper. - [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - ZH: - [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test) - [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper - Randomly sample 2,000 multiple-choice questions with single answer. - [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu) - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology - [CExam](https://github.com/williamliujl/CMExam): Not used in the paper - Randomly sample 2,000 multiple-choice questions - ES: [Head_qa](https://huggingface.co/datasets/head_qa) - FR: - [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA) - [MMLU_FR] - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - AR: [MMLU_AR](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - JA: [IgakuQA](https://github.com/jungokasai/IgakuQA) - KO: [KorMedMCQA](https://huggingface.co/datasets/sean0042/KorMedMCQA) - IT: - [MedExpQA](https://huggingface.co/datasets/HiTZ/MedExpQA) - [MMLU_IT] - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - DE: [BioInstructQA](https://huggingface.co/datasets/BioMistral/BioInstructQA): German part - PT: [BioInstructQA](https://huggingface.co/datasets/BioMistral/BioInstructQA): Portuguese part - RU: [RuMedBench](https://github.com/sb-ai-lab/MedBench) </details> ## Model Download and Inference We take Apollo-MoE-0.5B as an example 1. Login Huggingface ``` huggingface-cli login --token $HUGGINGFACE_TOKEN ``` 2. Download model to local dir ```python from huggingface_hub import snapshot_download import os local_model_dir=os.path.join('/path/to/models/dir','Apollo-MoE-0.5B') snapshot_download(repo_id="FreedomIntelligence/Apollo-MoE-0.5B", local_dir=local_model_dir) ``` 3. Inference Example ```python from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig import os local_model_dir=os.path.join('/path/to/models/dir','Apollo-MoE-0.5B') model=AutoModelForCausalLM.from_pretrained(local_model_dir,trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained(local_model_dir,trust_remote_code=True) generation_config = GenerationConfig.from_pretrained(local_model_dir, pad_token_id=tokenizer.pad_token_id, num_return_sequences=1, max_new_tokens=7, min_new_tokens=2, do_sample=False, temperature=1.0, top_k=50, top_p=1.0) inputs = tokenizer('Answer direclty.\nThe capital of Mongolia is Ulaanbaatar.\nThe capital of Iceland is Reykjavik.\nThe capital of Australia is', return_tensors='pt') inputs = inputs.to(model.device) pred = model.generate(**inputs,generation_config=generation_config) print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)) ``` ## Results reproduction <details><summary>Click to expand</summary> We take Apollo2-7B or Apollo-MoE-0.5B as example 1. Download Dataset for project: ``` bash 0.download_data.sh  ``` 2. 
Prepare test and dev data for the specific model: - Create test data with the special tokens ``` bash 1.data_process_test&dev.sh ``` 3. Prepare training data for the specific model (create tokenized data in advance): - You can adjust the data training order and the number of training epochs in this step ``` bash 2.data_process_train.sh ``` 4. Train the model - If you want to train on multiple nodes, please refer to ./src/sft/training_config/zero_multi.yaml ``` bash 3.single_node_train.sh ``` 5. Evaluate your model: generate scores for the benchmarks ``` bash 4.eval.sh ``` </details> ## Citation Please use the following citation if you intend to use our dataset for training or evaluation: ``` @misc{zheng2024efficientlydemocratizingmedicalllms, title={Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts}, author={Guorui Zheng and Xidong Wang and Juhao Liang and Nuo Chen and Yuping Zheng and Benyou Wang}, year={2024}, eprint={2410.10626}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2410.10626}, } ```
{"base_model": ["Qwen/Qwen2-0.5B"], "datasets": ["FreedomIntelligence/ApolloMoEDataset"], "language": ["ar", "en", "zh", "ko", "ja", "mn", "th", "vi", "lo", "mg", "de", "pt", "es", "fr", "ru", "it", "hr", "gl", "cs", "co", "la", "uk", "bs", "bg", "eo", "sq", "da", "sa", "gn", "sr", "sk", "gd", "lb", "hi", "ku", "mt", "he", "ln", "bm", "sw", "ig", "rw", "ha"], "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "question-answering", "tags": ["biology", "medical"]}
dataset
null
586
yongzx/pythia-160m-sft-hh
yongzx
text-generation
[ "transformers", "pytorch", "gpt_neox", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-08-23T04:08:09Z
2023-08-28T18:52:18+00:00
11
0
--- {} --- wandb run: https://wandb.ai/eleutherai/pythia-rlhf/runs/e0drjcsz?workspace=user-yongzx Model Evals: | Task |Version|Filter| Metric |Value | |Stderr| |-------------|-------|------|--------|-----:|---|-----:| |arc_challenge|Yaml |none |acc |0.1877|± |0.0114| | | |none |acc_norm|0.2372|± |0.0124| |arc_easy |Yaml |none |acc |0.4390|± |0.0102| | | |none |acc_norm|0.4082|± |0.0101| |logiqa |Yaml |none |acc |0.1889|± |0.0154| | | |none |acc_norm|0.2473|± |0.0169| |piqa |Yaml |none |acc |0.6213|± |0.0113| | | |none |acc_norm|0.6279|± |0.0113| |sciq |Yaml |none |acc |0.7230|± |0.0142| | | |none |acc_norm|0.6840|± |0.0147| |winogrande |Yaml |none |acc |0.5162|± |0.0140| |wsc |Yaml |none |acc |0.3654|± |0.0474| |lambada_openai|Yaml |none |perplexity|58.9478|± |2.7662| | | |none |acc | 0.2602|± |0.0061|
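The card above only reports evaluation scores, so the snippet below is a minimal usage sketch with `transformers`; the Human/Assistant prompt framing is an assumption based on the model being an SFT checkpoint for the HH data, and the sampling settings are illustrative, not recommendations from the author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yongzx/pythia-160m-sft-hh"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# HH-style dialogue framing (assumed, not documented in this card).
prompt = "Human: How do I keep a sourdough starter alive?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,  # GPT-NeoX tokenizers typically have no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```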
[ "SCIQ" ]
Non_BioNLP
wandb run: https://wandb.ai/eleutherai/pythia-rlhf/runs/e0drjcsz?workspace=user-yongzx Model Evals: | Task |Version|Filter| Metric |Value | |Stderr| |-------------|-------|------|--------|-----:|---|-----:| |arc_challenge|Yaml |none |acc |0.1877|± |0.0114| | | |none |acc_norm|0.2372|± |0.0124| |arc_easy |Yaml |none |acc |0.4390|± |0.0102| | | |none |acc_norm|0.4082|± |0.0101| |logiqa |Yaml |none |acc |0.1889|± |0.0154| | | |none |acc_norm|0.2473|± |0.0169| |piqa |Yaml |none |acc |0.6213|± |0.0113| | | |none |acc_norm|0.6279|± |0.0113| |sciq |Yaml |none |acc |0.7230|± |0.0142| | | |none |acc_norm|0.6840|± |0.0147| |winogrande |Yaml |none |acc |0.5162|± |0.0140| |wsc |Yaml |none |acc |0.3654|± |0.0474| |lambada_openai|Yaml |none |perplexity|58.9478|± |2.7662| | | |none |acc | 0.2602|± |0.0061|
{}
dataset
null
587
lixsh6/MegatronBert-1B3-embedding
lixsh6
null
[ "mteb", "model-index", "region:us" ]
2023-07-28T02:37:48Z
2023-07-28T02:45:51+00:00
0
0
--- tags: - mteb model-index: - name: bert_1b3_mixlang_newstep3 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 70.11940298507463 - type: ap value: 32.37756187516329 - type: f1 value: 63.92312669545795 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.950675 - type: ap value: 89.69186819088316 - type: f1 value: 92.94108521905532 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 50.522 - type: f1 value: 48.76020527037862 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 14.865 - type: map_at_10 value: 26.026 - type: map_at_100 value: 27.586 - type: map_at_1000 value: 27.622999999999998 - type: map_at_3 value: 21.859 - type: map_at_5 value: 24.049 - type: mrr_at_1 value: 15.504999999999999 - type: mrr_at_10 value: 26.265 - type: mrr_at_100 value: 27.810000000000002 - type: mrr_at_1000 value: 27.847 - type: mrr_at_3 value: 22.06 - type: mrr_at_5 value: 24.247 - type: ndcg_at_1 value: 14.865 - type: ndcg_at_10 value: 32.934999999999995 - type: ndcg_at_100 value: 40.627 - type: ndcg_at_1000 value: 41.524 - type: ndcg_at_3 value: 24.153 - type: ndcg_at_5 value: 28.133999999999997 - type: precision_at_1 value: 14.865 - type: precision_at_10 value: 5.541 - type: precision_at_100 value: 0.9159999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 10.266 - type: precision_at_5 value: 8.108 - type: recall_at_1 value: 14.865 - type: recall_at_10 value: 55.405 - type: recall_at_100 value: 91.607 - type: recall_at_1000 value: 98.506 - type: recall_at_3 value: 30.797 - type: recall_at_5 value: 40.541 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.028296913559814 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 38.38123118365735 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.9616553564134 - type: mrr value: 72.16033504814668 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.00899493452621 - type: cos_sim_spearman value: 83.85673000958819 - type: euclidean_pearson value: 85.65567511199598 - type: euclidean_spearman value: 83.90311660870698 - type: manhattan_pearson value: 85.37147829428248 - type: manhattan_spearman value: 83.74588411039522 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 
75.5909090909091 - type: f1 value: 74.476632049175 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.981180962194216 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 34.9394829907367 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 31.277 - type: map_at_10 value: 42.153 - type: map_at_100 value: 43.683 - type: map_at_1000 value: 43.817 - type: map_at_3 value: 38.454 - type: map_at_5 value: 40.721000000000004 - type: mrr_at_1 value: 38.913 - type: mrr_at_10 value: 48.232 - type: mrr_at_100 value: 48.888 - type: mrr_at_1000 value: 48.929 - type: mrr_at_3 value: 45.279 - type: mrr_at_5 value: 47.089 - type: ndcg_at_1 value: 38.913 - type: ndcg_at_10 value: 48.518 - type: ndcg_at_100 value: 53.797 - type: ndcg_at_1000 value: 55.754999999999995 - type: ndcg_at_3 value: 43.122 - type: ndcg_at_5 value: 45.869 - type: precision_at_1 value: 38.913 - type: precision_at_10 value: 9.413 - type: precision_at_100 value: 1.567 - type: precision_at_1000 value: 0.20600000000000002 - type: precision_at_3 value: 20.791999999999998 - type: precision_at_5 value: 15.193000000000001 - type: recall_at_1 value: 31.277 - type: recall_at_10 value: 60.475 - type: recall_at_100 value: 82.675 - type: recall_at_1000 value: 95.298 - type: recall_at_3 value: 44.388 - type: recall_at_5 value: 52.242999999999995 - type: map_at_1 value: 25.593 - type: map_at_10 value: 35.089999999999996 - type: map_at_100 value: 36.269 - type: map_at_1000 value: 36.419000000000004 - type: map_at_3 value: 32.449 - type: map_at_5 value: 33.952 - type: mrr_at_1 value: 32.484 - type: mrr_at_10 value: 40.725 - type: mrr_at_100 value: 41.465999999999994 - type: mrr_at_1000 value: 41.521 - type: mrr_at_3 value: 38.757999999999996 - type: mrr_at_5 value: 39.869 - type: ndcg_at_1 value: 32.484 - type: ndcg_at_10 value: 40.384 - type: ndcg_at_100 value: 44.984 - type: ndcg_at_1000 value: 47.528 - type: ndcg_at_3 value: 36.77 - type: ndcg_at_5 value: 38.505 - type: precision_at_1 value: 32.484 - type: precision_at_10 value: 7.866 - type: precision_at_100 value: 1.2959999999999998 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 18.195 - type: precision_at_5 value: 13.032 - type: recall_at_1 value: 25.593 - type: recall_at_10 value: 49.289 - type: recall_at_100 value: 69.84700000000001 - type: recall_at_1000 value: 86.329 - type: recall_at_3 value: 38.51 - type: recall_at_5 value: 43.349 - type: map_at_1 value: 35.116 - type: map_at_10 value: 45.908 - type: map_at_100 value: 46.979 - type: map_at_1000 value: 47.046 - type: map_at_3 value: 42.724000000000004 - type: map_at_5 value: 44.507999999999996 - type: mrr_at_1 value: 40.313 - type: mrr_at_10 value: 49.195 - type: mrr_at_100 value: 49.996 - type: mrr_at_1000 value: 50.03300000000001 - type: mrr_at_3 value: 46.708 - type: mrr_at_5 value: 48.187999999999995 - type: ndcg_at_1 value: 40.313 - type: ndcg_at_10 value: 51.43600000000001 - type: ndcg_at_100 value: 55.873 - type: ndcg_at_1000 value: 57.288 - type: ndcg_at_3 value: 46.038000000000004 - type: ndcg_at_5 value: 48.729 - type: precision_at_1 value: 40.313 - type: precision_at_10 
value: 8.382000000000001 - type: precision_at_100 value: 1.145 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 20.480999999999998 - type: precision_at_5 value: 14.219000000000001 - type: recall_at_1 value: 35.116 - type: recall_at_10 value: 64.524 - type: recall_at_100 value: 83.859 - type: recall_at_1000 value: 93.977 - type: recall_at_3 value: 50.102999999999994 - type: recall_at_5 value: 56.818000000000005 - type: map_at_1 value: 18.488 - type: map_at_10 value: 25.667 - type: map_at_100 value: 26.541999999999998 - type: map_at_1000 value: 26.637 - type: map_at_3 value: 23.483 - type: map_at_5 value: 24.667 - type: mrr_at_1 value: 20.0 - type: mrr_at_10 value: 27.178 - type: mrr_at_100 value: 27.989000000000004 - type: mrr_at_1000 value: 28.07 - type: mrr_at_3 value: 25.122 - type: mrr_at_5 value: 26.275 - type: ndcg_at_1 value: 20.0 - type: ndcg_at_10 value: 29.736 - type: ndcg_at_100 value: 34.358 - type: ndcg_at_1000 value: 37.036 - type: ndcg_at_3 value: 25.405 - type: ndcg_at_5 value: 27.441 - type: precision_at_1 value: 20.0 - type: precision_at_10 value: 4.712000000000001 - type: precision_at_100 value: 0.751 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 10.885 - type: precision_at_5 value: 7.706 - type: recall_at_1 value: 18.488 - type: recall_at_10 value: 40.83 - type: recall_at_100 value: 62.707 - type: recall_at_1000 value: 83.41199999999999 - type: recall_at_3 value: 29.21 - type: recall_at_5 value: 34.009 - type: map_at_1 value: 9.532 - type: map_at_10 value: 15.193000000000001 - type: map_at_100 value: 16.381 - type: map_at_1000 value: 16.524 - type: map_at_3 value: 13.386000000000001 - type: map_at_5 value: 14.261 - type: mrr_at_1 value: 11.940000000000001 - type: mrr_at_10 value: 18.285 - type: mrr_at_100 value: 19.373 - type: mrr_at_1000 value: 19.467000000000002 - type: mrr_at_3 value: 16.252 - type: mrr_at_5 value: 17.26 - type: ndcg_at_1 value: 11.940000000000001 - type: ndcg_at_10 value: 19.095000000000002 - type: ndcg_at_100 value: 25.214 - type: ndcg_at_1000 value: 28.619 - type: ndcg_at_3 value: 15.482000000000001 - type: ndcg_at_5 value: 16.892 - type: precision_at_1 value: 11.940000000000001 - type: precision_at_10 value: 3.744 - type: precision_at_100 value: 0.815 - type: precision_at_1000 value: 0.124 - type: precision_at_3 value: 7.710999999999999 - type: precision_at_5 value: 5.647 - type: recall_at_1 value: 9.532 - type: recall_at_10 value: 28.026 - type: recall_at_100 value: 55.253 - type: recall_at_1000 value: 79.86999999999999 - type: recall_at_3 value: 18.084 - type: recall_at_5 value: 21.553 - type: map_at_1 value: 23.416 - type: map_at_10 value: 32.649 - type: map_at_100 value: 33.983000000000004 - type: map_at_1000 value: 34.107 - type: map_at_3 value: 29.254 - type: map_at_5 value: 31.339 - type: mrr_at_1 value: 28.778 - type: mrr_at_10 value: 37.513999999999996 - type: mrr_at_100 value: 38.458999999999996 - type: mrr_at_1000 value: 38.517 - type: mrr_at_3 value: 34.585 - type: mrr_at_5 value: 36.514 - type: ndcg_at_1 value: 28.778 - type: ndcg_at_10 value: 38.233 - type: ndcg_at_100 value: 44.14 - type: ndcg_at_1000 value: 46.583000000000006 - type: ndcg_at_3 value: 32.718 - type: ndcg_at_5 value: 35.778999999999996 - type: precision_at_1 value: 28.778 - type: precision_at_10 value: 7.2090000000000005 - type: precision_at_100 value: 1.194 - type: precision_at_1000 value: 0.16 - type: precision_at_3 value: 15.495999999999999 - type: precision_at_5 value: 11.781 - type: recall_at_1 value: 23.416 - type: 
recall_at_10 value: 50.063 - type: recall_at_100 value: 75.4 - type: recall_at_1000 value: 91.74799999999999 - type: recall_at_3 value: 35.113 - type: recall_at_5 value: 42.620999999999995 - type: map_at_1 value: 18.891 - type: map_at_10 value: 28.000000000000004 - type: map_at_100 value: 29.354999999999997 - type: map_at_1000 value: 29.453000000000003 - type: map_at_3 value: 24.551000000000002 - type: map_at_5 value: 26.383000000000003 - type: mrr_at_1 value: 23.402 - type: mrr_at_10 value: 32.308 - type: mrr_at_100 value: 33.242 - type: mrr_at_1000 value: 33.294000000000004 - type: mrr_at_3 value: 29.262 - type: mrr_at_5 value: 30.997000000000003 - type: ndcg_at_1 value: 23.402 - type: ndcg_at_10 value: 33.932 - type: ndcg_at_100 value: 39.925 - type: ndcg_at_1000 value: 42.126999999999995 - type: ndcg_at_3 value: 27.816999999999997 - type: ndcg_at_5 value: 30.554 - type: precision_at_1 value: 23.402 - type: precision_at_10 value: 6.747 - type: precision_at_100 value: 1.147 - type: precision_at_1000 value: 0.15 - type: precision_at_3 value: 13.469999999999999 - type: precision_at_5 value: 10.32 - type: recall_at_1 value: 18.891 - type: recall_at_10 value: 47.58 - type: recall_at_100 value: 73.668 - type: recall_at_1000 value: 88.77000000000001 - type: recall_at_3 value: 30.726 - type: recall_at_5 value: 37.547000000000004 - type: map_at_1 value: 20.303499999999996 - type: map_at_10 value: 28.263499999999997 - type: map_at_100 value: 29.431250000000002 - type: map_at_1000 value: 29.555166666666665 - type: map_at_3 value: 25.59133333333333 - type: map_at_5 value: 27.091500000000003 - type: mrr_at_1 value: 24.19725 - type: mrr_at_10 value: 31.803750000000004 - type: mrr_at_100 value: 32.691916666666664 - type: mrr_at_1000 value: 32.760083333333334 - type: mrr_at_3 value: 29.447749999999996 - type: mrr_at_5 value: 30.79858333333334 - type: ndcg_at_1 value: 24.19725 - type: ndcg_at_10 value: 33.11925000000001 - type: ndcg_at_100 value: 38.384916666666655 - type: ndcg_at_1000 value: 40.991499999999995 - type: ndcg_at_3 value: 28.5115 - type: ndcg_at_5 value: 30.718833333333333 - type: precision_at_1 value: 24.19725 - type: precision_at_10 value: 6.061666666666666 - type: precision_at_100 value: 1.0404166666666665 - type: precision_at_1000 value: 0.14583333333333337 - type: precision_at_3 value: 13.347083333333334 - type: precision_at_5 value: 9.747916666666667 - type: recall_at_1 value: 20.303499999999996 - type: recall_at_10 value: 43.93183333333334 - type: recall_at_100 value: 67.47800000000001 - type: recall_at_1000 value: 85.91425000000001 - type: recall_at_3 value: 31.160083333333333 - type: recall_at_5 value: 36.76633333333333 - type: map_at_1 value: 12.666 - type: map_at_10 value: 18.448999999999998 - type: map_at_100 value: 19.448 - type: map_at_1000 value: 19.54 - type: map_at_3 value: 16.581000000000003 - type: map_at_5 value: 17.485999999999997 - type: mrr_at_1 value: 14.11 - type: mrr_at_10 value: 19.796 - type: mrr_at_100 value: 20.785999999999998 - type: mrr_at_1000 value: 20.861 - type: mrr_at_3 value: 18.175 - type: mrr_at_5 value: 18.926000000000002 - type: ndcg_at_1 value: 14.11 - type: ndcg_at_10 value: 21.83 - type: ndcg_at_100 value: 27.017999999999997 - type: ndcg_at_1000 value: 29.520999999999997 - type: ndcg_at_3 value: 18.358 - type: ndcg_at_5 value: 19.719 - type: precision_at_1 value: 14.11 - type: precision_at_10 value: 3.819 - type: precision_at_100 value: 0.701 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 8.384 - type: precision_at_5 value: 
5.92 - type: recall_at_1 value: 12.666 - type: recall_at_10 value: 30.746000000000002 - type: recall_at_100 value: 54.675 - type: recall_at_1000 value: 73.57900000000001 - type: recall_at_3 value: 21.196 - type: recall_at_5 value: 24.552 - type: map_at_1 value: 12.53 - type: map_at_10 value: 17.881 - type: map_at_100 value: 18.923000000000002 - type: map_at_1000 value: 19.049 - type: map_at_3 value: 16.088 - type: map_at_5 value: 17.0 - type: mrr_at_1 value: 15.244 - type: mrr_at_10 value: 20.906 - type: mrr_at_100 value: 21.83 - type: mrr_at_1000 value: 21.913 - type: mrr_at_3 value: 19.104 - type: mrr_at_5 value: 19.994999999999997 - type: ndcg_at_1 value: 15.244 - type: ndcg_at_10 value: 21.541 - type: ndcg_at_100 value: 26.799 - type: ndcg_at_1000 value: 29.927 - type: ndcg_at_3 value: 18.208 - type: ndcg_at_5 value: 19.573999999999998 - type: precision_at_1 value: 15.244 - type: precision_at_10 value: 4.04 - type: precision_at_100 value: 0.808 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 8.672 - type: precision_at_5 value: 6.283999999999999 - type: recall_at_1 value: 12.53 - type: recall_at_10 value: 29.601 - type: recall_at_100 value: 53.615 - type: recall_at_1000 value: 76.344 - type: recall_at_3 value: 20.159 - type: recall_at_5 value: 23.746000000000002 - type: map_at_1 value: 21.849 - type: map_at_10 value: 28.937 - type: map_at_100 value: 30.003999999999998 - type: map_at_1000 value: 30.122 - type: map_at_3 value: 26.150000000000002 - type: map_at_5 value: 27.744000000000003 - type: mrr_at_1 value: 25.093 - type: mrr_at_10 value: 32.143 - type: mrr_at_100 value: 33.053 - type: mrr_at_1000 value: 33.134 - type: mrr_at_3 value: 29.586000000000002 - type: mrr_at_5 value: 31.116 - type: ndcg_at_1 value: 25.093 - type: ndcg_at_10 value: 33.631 - type: ndcg_at_100 value: 38.893 - type: ndcg_at_1000 value: 41.692 - type: ndcg_at_3 value: 28.497 - type: ndcg_at_5 value: 31.028 - type: precision_at_1 value: 25.093 - type: precision_at_10 value: 5.765 - type: precision_at_100 value: 0.947 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 12.623999999999999 - type: precision_at_5 value: 9.347 - type: recall_at_1 value: 21.849 - type: recall_at_10 value: 44.767 - type: recall_at_100 value: 68.298 - type: recall_at_1000 value: 88.107 - type: recall_at_3 value: 30.968 - type: recall_at_5 value: 37.19 - type: map_at_1 value: 18.409 - type: map_at_10 value: 27.750999999999998 - type: map_at_100 value: 29.241 - type: map_at_1000 value: 29.467 - type: map_at_3 value: 24.29 - type: map_at_5 value: 26.448 - type: mrr_at_1 value: 22.53 - type: mrr_at_10 value: 31.887999999999998 - type: mrr_at_100 value: 32.89 - type: mrr_at_1000 value: 32.956 - type: mrr_at_3 value: 28.854000000000003 - type: mrr_at_5 value: 30.751 - type: ndcg_at_1 value: 22.53 - type: ndcg_at_10 value: 33.827 - type: ndcg_at_100 value: 39.749 - type: ndcg_at_1000 value: 42.677 - type: ndcg_at_3 value: 28.101 - type: ndcg_at_5 value: 31.380999999999997 - type: precision_at_1 value: 22.53 - type: precision_at_10 value: 6.976 - type: precision_at_100 value: 1.443 - type: precision_at_1000 value: 0.23700000000000002 - type: precision_at_3 value: 13.966000000000001 - type: precision_at_5 value: 10.909 - type: recall_at_1 value: 18.409 - type: recall_at_10 value: 46.217000000000006 - type: recall_at_100 value: 72.882 - type: recall_at_1000 value: 91.625 - type: recall_at_3 value: 30.64 - type: recall_at_5 value: 38.948 - type: map_at_1 value: 15.875 - type: map_at_10 value: 21.484 - type: 
map_at_100 value: 22.367 - type: map_at_1000 value: 22.481 - type: map_at_3 value: 19.686 - type: map_at_5 value: 20.589 - type: mrr_at_1 value: 17.560000000000002 - type: mrr_at_10 value: 23.474999999999998 - type: mrr_at_100 value: 24.331 - type: mrr_at_1000 value: 24.426000000000002 - type: mrr_at_3 value: 21.688 - type: mrr_at_5 value: 22.603 - type: ndcg_at_1 value: 17.560000000000002 - type: ndcg_at_10 value: 25.268 - type: ndcg_at_100 value: 29.869 - type: ndcg_at_1000 value: 33.145 - type: ndcg_at_3 value: 21.622 - type: ndcg_at_5 value: 23.155 - type: precision_at_1 value: 17.560000000000002 - type: precision_at_10 value: 4.067 - type: precision_at_100 value: 0.6709999999999999 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 9.489 - type: precision_at_5 value: 6.617000000000001 - type: recall_at_1 value: 15.875 - type: recall_at_10 value: 35.064 - type: recall_at_100 value: 56.857 - type: recall_at_1000 value: 81.91199999999999 - type: recall_at_3 value: 24.823999999999998 - type: recall_at_5 value: 28.62 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.637 - type: map_at_10 value: 18.401999999999997 - type: map_at_100 value: 20.121 - type: map_at_1000 value: 20.305999999999997 - type: map_at_3 value: 15.348 - type: map_at_5 value: 16.841 - type: mrr_at_1 value: 23.909 - type: mrr_at_10 value: 34.512 - type: mrr_at_100 value: 35.485 - type: mrr_at_1000 value: 35.528999999999996 - type: mrr_at_3 value: 31.368000000000002 - type: mrr_at_5 value: 33.137 - type: ndcg_at_1 value: 23.909 - type: ndcg_at_10 value: 25.94 - type: ndcg_at_100 value: 33.116 - type: ndcg_at_1000 value: 36.502 - type: ndcg_at_3 value: 21.046 - type: ndcg_at_5 value: 22.715 - type: precision_at_1 value: 23.909 - type: precision_at_10 value: 8.195 - type: precision_at_100 value: 1.593 - type: precision_at_1000 value: 0.22200000000000003 - type: precision_at_3 value: 15.744 - type: precision_at_5 value: 12.142999999999999 - type: recall_at_1 value: 10.637 - type: recall_at_10 value: 31.251 - type: recall_at_100 value: 56.477999999999994 - type: recall_at_1000 value: 75.52600000000001 - type: recall_at_3 value: 19.482 - type: recall_at_5 value: 24.145 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 7.786999999999999 - type: map_at_10 value: 16.182 - type: map_at_100 value: 22.698 - type: map_at_1000 value: 24.192 - type: map_at_3 value: 11.84 - type: map_at_5 value: 13.602 - type: mrr_at_1 value: 56.99999999999999 - type: mrr_at_10 value: 66.702 - type: mrr_at_100 value: 67.291 - type: mrr_at_1000 value: 67.301 - type: mrr_at_3 value: 64.708 - type: mrr_at_5 value: 65.946 - type: ndcg_at_1 value: 46.75 - type: ndcg_at_10 value: 35.469 - type: ndcg_at_100 value: 40.077 - type: ndcg_at_1000 value: 47.252 - type: ndcg_at_3 value: 39.096 - type: ndcg_at_5 value: 36.766 - type: precision_at_1 value: 56.99999999999999 - type: precision_at_10 value: 28.175 - type: precision_at_100 value: 9.423 - type: precision_at_1000 value: 2.017 - type: precision_at_3 value: 41.667 - type: precision_at_5 value: 35.199999999999996 - type: recall_at_1 value: 7.786999999999999 - type: recall_at_10 value: 21.428 - type: recall_at_100 value: 45.86 - type: recall_at_1000 value: 68.83 - type: recall_at_3 value: 12.992 - type: recall_at_5 value: 16.091 - task: type: Classification dataset: name: MTEB 
EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 45.985 - type: f1 value: 39.52034839578244 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 39.141999999999996 - type: map_at_10 value: 50.255 - type: map_at_100 value: 50.938 - type: map_at_1000 value: 50.975 - type: map_at_3 value: 47.4 - type: map_at_5 value: 49.172 - type: mrr_at_1 value: 41.794 - type: mrr_at_10 value: 53.198 - type: mrr_at_100 value: 53.82900000000001 - type: mrr_at_1000 value: 53.857 - type: mrr_at_3 value: 50.32 - type: mrr_at_5 value: 52.105999999999995 - type: ndcg_at_1 value: 41.794 - type: ndcg_at_10 value: 56.411 - type: ndcg_at_100 value: 59.663 - type: ndcg_at_1000 value: 60.590999999999994 - type: ndcg_at_3 value: 50.73 - type: ndcg_at_5 value: 53.823 - type: precision_at_1 value: 41.794 - type: precision_at_10 value: 7.9159999999999995 - type: precision_at_100 value: 0.968 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 20.627000000000002 - type: precision_at_5 value: 14.038 - type: recall_at_1 value: 39.141999999999996 - type: recall_at_10 value: 72.695 - type: recall_at_100 value: 87.44800000000001 - type: recall_at_1000 value: 94.313 - type: recall_at_3 value: 57.415000000000006 - type: recall_at_5 value: 64.851 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 18.715 - type: map_at_10 value: 30.253999999999998 - type: map_at_100 value: 32.123000000000005 - type: map_at_1000 value: 32.303 - type: map_at_3 value: 26.203 - type: map_at_5 value: 28.585 - type: mrr_at_1 value: 36.42 - type: mrr_at_10 value: 45.456 - type: mrr_at_100 value: 46.314 - type: mrr_at_1000 value: 46.356 - type: mrr_at_3 value: 42.798 - type: mrr_at_5 value: 44.365 - type: ndcg_at_1 value: 36.42 - type: ndcg_at_10 value: 37.747 - type: ndcg_at_100 value: 44.714999999999996 - type: ndcg_at_1000 value: 47.866 - type: ndcg_at_3 value: 34.166999999999994 - type: ndcg_at_5 value: 35.54 - type: precision_at_1 value: 36.42 - type: precision_at_10 value: 10.602 - type: precision_at_100 value: 1.773 - type: precision_at_1000 value: 0.234 - type: precision_at_3 value: 22.84 - type: precision_at_5 value: 17.315 - type: recall_at_1 value: 18.715 - type: recall_at_10 value: 44.199 - type: recall_at_100 value: 70.097 - type: recall_at_1000 value: 89.13600000000001 - type: recall_at_3 value: 30.543 - type: recall_at_5 value: 36.705 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 30.608 - type: map_at_10 value: 45.829 - type: map_at_100 value: 46.786 - type: map_at_1000 value: 46.869 - type: map_at_3 value: 42.834 - type: map_at_5 value: 44.566 - type: mrr_at_1 value: 61.214999999999996 - type: mrr_at_10 value: 69.072 - type: mrr_at_100 value: 69.492 - type: mrr_at_1000 value: 69.512 - type: mrr_at_3 value: 67.553 - type: mrr_at_5 value: 68.446 - type: ndcg_at_1 value: 61.214999999999996 - type: ndcg_at_10 value: 54.66 - type: ndcg_at_100 value: 58.342000000000006 - type: ndcg_at_1000 value: 60.101000000000006 - type: ndcg_at_3 value: 49.932 - type: ndcg_at_5 value: 52.342999999999996 - type: precision_at_1 value: 61.214999999999996 - type: precision_at_10 value: 11.65 - type: precision_at_100 value: 1.4529999999999998 - type: precision_at_1000 value: 0.169 
- type: precision_at_3 value: 31.78 - type: precision_at_5 value: 20.979999999999997 - type: recall_at_1 value: 30.608 - type: recall_at_10 value: 58.251 - type: recall_at_100 value: 72.667 - type: recall_at_1000 value: 84.396 - type: recall_at_3 value: 47.67 - type: recall_at_5 value: 52.451 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.21999999999998 - type: ap value: 85.88889163834975 - type: f1 value: 90.20542534971861 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 19.785 - type: map_at_10 value: 31.596000000000004 - type: map_at_100 value: 32.849000000000004 - type: map_at_1000 value: 32.903999999999996 - type: map_at_3 value: 27.772000000000002 - type: map_at_5 value: 29.952 - type: mrr_at_1 value: 20.344 - type: mrr_at_10 value: 32.146 - type: mrr_at_100 value: 33.349000000000004 - type: mrr_at_1000 value: 33.396 - type: mrr_at_3 value: 28.403 - type: mrr_at_5 value: 30.542 - type: ndcg_at_1 value: 20.358 - type: ndcg_at_10 value: 38.288 - type: ndcg_at_100 value: 44.383 - type: ndcg_at_1000 value: 45.714 - type: ndcg_at_3 value: 30.525999999999996 - type: ndcg_at_5 value: 34.393 - type: precision_at_1 value: 20.358 - type: precision_at_10 value: 6.16 - type: precision_at_100 value: 0.9209999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.08 - type: precision_at_5 value: 9.799 - type: recall_at_1 value: 19.785 - type: recall_at_10 value: 58.916000000000004 - type: recall_at_100 value: 87.24 - type: recall_at_1000 value: 97.37599999999999 - type: recall_at_3 value: 37.872 - type: recall_at_5 value: 47.116 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.63429092567262 - type: f1 value: 88.58612904162257 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 58.080255357957135 - type: f1 value: 39.561402859935 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.03026227303296 - type: f1 value: 61.10334739098155 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.05245460659046 - type: f1 value: 69.96280851244295 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.9762359299763 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.670044418802444 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 
29.32330726926572 - type: mrr value: 30.16727607430052 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 4.552 - type: map_at_10 value: 10.692 - type: map_at_100 value: 13.835 - type: map_at_1000 value: 15.305 - type: map_at_3 value: 7.5009999999999994 - type: map_at_5 value: 8.988 - type: mrr_at_1 value: 39.318999999999996 - type: mrr_at_10 value: 48.809000000000005 - type: mrr_at_100 value: 49.382 - type: mrr_at_1000 value: 49.442 - type: mrr_at_3 value: 46.078 - type: mrr_at_5 value: 48.091 - type: ndcg_at_1 value: 37.152 - type: ndcg_at_10 value: 30.159000000000002 - type: ndcg_at_100 value: 28.371000000000002 - type: ndcg_at_1000 value: 37.632 - type: ndcg_at_3 value: 34.662 - type: ndcg_at_5 value: 32.814 - type: precision_at_1 value: 38.7 - type: precision_at_10 value: 23.034 - type: precision_at_100 value: 7.588 - type: precision_at_1000 value: 2.0709999999999997 - type: precision_at_3 value: 33.024 - type: precision_at_5 value: 29.164 - type: recall_at_1 value: 4.552 - type: recall_at_10 value: 14.827000000000002 - type: recall_at_100 value: 29.256 - type: recall_at_1000 value: 61.739 - type: recall_at_3 value: 8.38 - type: recall_at_5 value: 11.123 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 25.424999999999997 - type: map_at_10 value: 39.972 - type: map_at_100 value: 41.163 - type: map_at_1000 value: 41.202 - type: map_at_3 value: 35.546 - type: map_at_5 value: 38.146 - type: mrr_at_1 value: 28.794999999999998 - type: mrr_at_10 value: 42.315999999999995 - type: mrr_at_100 value: 43.253 - type: mrr_at_1000 value: 43.282 - type: mrr_at_3 value: 38.649 - type: mrr_at_5 value: 40.858 - type: ndcg_at_1 value: 28.766000000000002 - type: ndcg_at_10 value: 47.614000000000004 - type: ndcg_at_100 value: 52.676 - type: ndcg_at_1000 value: 53.574 - type: ndcg_at_3 value: 39.292 - type: ndcg_at_5 value: 43.633 - type: precision_at_1 value: 28.766000000000002 - type: precision_at_10 value: 8.201 - type: precision_at_100 value: 1.099 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 18.201999999999998 - type: precision_at_5 value: 13.447000000000001 - type: recall_at_1 value: 25.424999999999997 - type: recall_at_10 value: 68.586 - type: recall_at_100 value: 90.556 - type: recall_at_1000 value: 97.197 - type: recall_at_3 value: 47.033 - type: recall_at_5 value: 57.044 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.054 - type: map_at_10 value: 83.991 - type: map_at_100 value: 84.63000000000001 - type: map_at_1000 value: 84.648 - type: map_at_3 value: 80.982 - type: map_at_5 value: 82.857 - type: mrr_at_1 value: 80.76 - type: mrr_at_10 value: 87.079 - type: mrr_at_100 value: 87.185 - type: mrr_at_1000 value: 87.18599999999999 - type: mrr_at_3 value: 86.03 - type: mrr_at_5 value: 86.771 - type: ndcg_at_1 value: 80.75 - type: ndcg_at_10 value: 87.85300000000001 - type: ndcg_at_100 value: 89.105 - type: ndcg_at_1000 value: 89.213 - type: ndcg_at_3 value: 84.87400000000001 - type: ndcg_at_5 value: 86.51299999999999 - type: precision_at_1 value: 80.75 - type: precision_at_10 value: 13.352 - type: precision_at_100 value: 1.528 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.113 - type: precision_at_5 value: 24.424 - type: recall_at_1 value: 70.054 - type: recall_at_10 
value: 95.209 - type: recall_at_100 value: 99.497 - type: recall_at_1000 value: 99.973 - type: recall_at_3 value: 86.654 - type: recall_at_5 value: 91.313 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 42.71909082787674 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 56.92567540870805 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 2.225 - type: map_at_10 value: 5.785 - type: map_at_100 value: 7.6240000000000006 - type: map_at_1000 value: 8.094999999999999 - type: map_at_3 value: 3.882 - type: map_at_5 value: 4.715 - type: mrr_at_1 value: 11.0 - type: mrr_at_10 value: 18.049 - type: mrr_at_100 value: 19.475 - type: mrr_at_1000 value: 19.599 - type: mrr_at_3 value: 15.082999999999998 - type: mrr_at_5 value: 16.583000000000002 - type: ndcg_at_1 value: 11.0 - type: ndcg_at_10 value: 10.59 - type: ndcg_at_100 value: 18.68 - type: ndcg_at_1000 value: 27.327 - type: ndcg_at_3 value: 8.932 - type: ndcg_at_5 value: 8.126 - type: precision_at_1 value: 11.0 - type: precision_at_10 value: 5.89 - type: precision_at_100 value: 1.778 - type: precision_at_1000 value: 0.385 - type: precision_at_3 value: 8.333 - type: precision_at_5 value: 7.3 - type: recall_at_1 value: 2.225 - type: recall_at_10 value: 11.948 - type: recall_at_100 value: 36.097 - type: recall_at_1000 value: 78.145 - type: recall_at_3 value: 5.078 - type: recall_at_5 value: 7.4079999999999995 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.87898494199837 - type: cos_sim_spearman value: 79.3815141247343 - type: euclidean_pearson value: 80.984944764735 - type: euclidean_spearman value: 79.37984688714191 - type: manhattan_pearson value: 80.96139326762788 - type: manhattan_spearman value: 79.34882764221987 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 82.94123934276303 - type: cos_sim_spearman value: 73.64821774752144 - type: euclidean_pearson value: 79.09149672589201 - type: euclidean_spearman value: 73.64174833442063 - type: manhattan_pearson value: 79.05135129686983 - type: manhattan_spearman value: 73.57858840270084 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 71.37047316191514 - type: cos_sim_spearman value: 75.56797051373606 - type: euclidean_pearson value: 74.59038333631109 - type: euclidean_spearman value: 75.55966023907652 - type: manhattan_pearson value: 74.56600039917967 - type: manhattan_spearman value: 75.52139454559969 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 71.75410054949431 - type: cos_sim_spearman value: 72.09826786050286 - type: euclidean_pearson value: 72.30015801748517 - type: euclidean_spearman value: 72.09347126863909 - type: manhattan_pearson value: 
72.2692656804079 - type: manhattan_spearman value: 72.07403601010577 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 83.09663528706463 - type: cos_sim_spearman value: 85.6296813586495 - type: euclidean_pearson value: 84.14347920777777 - type: euclidean_spearman value: 85.62948425849926 - type: manhattan_pearson value: 84.08840896634038 - type: manhattan_spearman value: 85.56264430897471 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 78.55984417539631 - type: cos_sim_spearman value: 82.06700938579174 - type: euclidean_pearson value: 80.92277218507344 - type: euclidean_spearman value: 82.06297899287695 - type: manhattan_pearson value: 80.89292734584946 - type: manhattan_spearman value: 82.01121177547141 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.70738419575085 - type: cos_sim_spearman value: 88.99910283221313 - type: euclidean_pearson value: 88.91458218447116 - type: euclidean_spearman value: 88.97188755639708 - type: manhattan_pearson value: 88.93397958768632 - type: manhattan_spearman value: 89.0514960821245 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 65.30101408630514 - type: cos_sim_spearman value: 66.15672143838582 - type: euclidean_pearson value: 66.61257552376895 - type: euclidean_spearman value: 66.00319920690566 - type: manhattan_pearson value: 66.81435622246758 - type: manhattan_spearman value: 66.35221377631379 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 81.94191078286725 - type: cos_sim_spearman value: 83.69085688689903 - type: euclidean_pearson value: 83.28942607749994 - type: euclidean_spearman value: 83.69370814043747 - type: manhattan_pearson value: 83.3553242227074 - type: manhattan_spearman value: 83.74306572840383 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 88.02503921524934 - type: mrr value: 96.47891777793738 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 51.24999999999999 - type: map_at_10 value: 61.472 - type: map_at_100 value: 62.132 - type: map_at_1000 value: 62.161 - type: map_at_3 value: 58.18299999999999 - type: map_at_5 value: 60.246 - type: mrr_at_1 value: 54.0 - type: mrr_at_10 value: 62.395 - type: mrr_at_100 value: 62.936 - type: mrr_at_1000 value: 62.965 - type: mrr_at_3 value: 59.833000000000006 - type: mrr_at_5 value: 61.5 - type: ndcg_at_1 value: 54.0 - type: ndcg_at_10 value: 66.235 - type: ndcg_at_100 value: 69.279 - type: ndcg_at_1000 value: 70.044 - type: ndcg_at_3 value: 60.679 - type: ndcg_at_5 value: 63.80200000000001 - type: precision_at_1 value: 54.0 - type: precision_at_10 value: 9.167 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 
0.11299999999999999 - type: precision_at_3 value: 24.111 - type: precision_at_5 value: 16.333000000000002 - type: recall_at_1 value: 51.24999999999999 - type: recall_at_10 value: 79.833 - type: recall_at_100 value: 94.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 65.267 - type: recall_at_5 value: 72.956 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.62673267326733 - type: cos_sim_ap value: 87.07482534376774 - type: cos_sim_f1 value: 80.63687724704674 - type: cos_sim_precision value: 82.89334741288279 - type: cos_sim_recall value: 78.5 - type: dot_accuracy value: 99.63564356435643 - type: dot_ap value: 86.98432756163903 - type: dot_f1 value: 80.91286307053943 - type: dot_precision value: 84.05172413793103 - type: dot_recall value: 78.0 - type: euclidean_accuracy value: 99.62673267326733 - type: euclidean_ap value: 87.0756316041764 - type: euclidean_f1 value: 80.53553038105046 - type: euclidean_precision value: 83.01486199575372 - type: euclidean_recall value: 78.2 - type: manhattan_accuracy value: 99.62574257425743 - type: manhattan_ap value: 87.05953308523233 - type: manhattan_f1 value: 80.50632911392405 - type: manhattan_precision value: 81.53846153846153 - type: manhattan_recall value: 79.5 - type: max_accuracy value: 99.63564356435643 - type: max_ap value: 87.0756316041764 - type: max_f1 value: 80.91286307053943 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 53.59692640735744 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.86771187657918 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 45.705711066037644 - type: mrr value: 46.25163133435192 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.066382997227624 - type: cos_sim_spearman value: 31.00934876843689 - type: dot_pearson value: 30.419206995727873 - type: dot_spearman value: 31.046571150093747 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.173 - type: map_at_10 value: 1.154 - type: map_at_100 value: 5.8180000000000005 - type: map_at_1000 value: 14.892 - type: map_at_3 value: 0.415 - type: map_at_5 value: 0.641 - type: mrr_at_1 value: 68.0 - type: mrr_at_10 value: 76.869 - type: mrr_at_100 value: 77.264 - type: mrr_at_1000 value: 77.264 - type: mrr_at_3 value: 75.333 - type: mrr_at_5 value: 76.333 - type: ndcg_at_1 value: 62.0 - type: ndcg_at_10 value: 50.81 - type: ndcg_at_100 value: 37.659 - type: ndcg_at_1000 value: 37.444 - type: ndcg_at_3 value: 55.11200000000001 - type: ndcg_at_5 value: 51.858000000000004 - type: precision_at_1 value: 68.0 - type: precision_at_10 value: 54.800000000000004 - type: precision_at_100 value: 38.36 - type: 
precision_at_1000 value: 16.88 - type: precision_at_3 value: 57.99999999999999 - type: precision_at_5 value: 54.800000000000004 - type: recall_at_1 value: 0.173 - type: recall_at_10 value: 1.435 - type: recall_at_100 value: 9.259 - type: recall_at_1000 value: 36.033 - type: recall_at_3 value: 0.447 - type: recall_at_5 value: 0.74 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.228 - type: map_at_10 value: 4.633 - type: map_at_100 value: 9.171 - type: map_at_1000 value: 10.58 - type: map_at_3 value: 2.413 - type: map_at_5 value: 3.3640000000000003 - type: mrr_at_1 value: 16.326999999999998 - type: mrr_at_10 value: 27.071 - type: mrr_at_100 value: 28.454 - type: mrr_at_1000 value: 28.475 - type: mrr_at_3 value: 19.048000000000002 - type: mrr_at_5 value: 24.354 - type: ndcg_at_1 value: 14.285999999999998 - type: ndcg_at_10 value: 13.312 - type: ndcg_at_100 value: 25.587 - type: ndcg_at_1000 value: 37.879000000000005 - type: ndcg_at_3 value: 11.591 - type: ndcg_at_5 value: 12.536 - type: precision_at_1 value: 16.326999999999998 - type: precision_at_10 value: 13.264999999999999 - type: precision_at_100 value: 6.061 - type: precision_at_1000 value: 1.4040000000000001 - type: precision_at_3 value: 12.245000000000001 - type: precision_at_5 value: 13.877999999999998 - type: recall_at_1 value: 1.228 - type: recall_at_10 value: 9.759 - type: recall_at_100 value: 38.809 - type: recall_at_1000 value: 76.229 - type: recall_at_3 value: 2.738 - type: recall_at_5 value: 5.510000000000001 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.33179999999999 - type: ap value: 14.379598043710034 - type: f1 value: 53.89665138084001 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 58.245614035087726 - type: f1 value: 58.3152945231724 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.38161204174159 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 82.60118018716099 - type: cos_sim_ap value: 62.5064927795416 - type: cos_sim_f1 value: 59.50177935943061 - type: cos_sim_precision value: 54.05172413793103 - type: cos_sim_recall value: 66.17414248021109 - type: dot_accuracy value: 82.52369315133814 - type: dot_ap value: 62.36545569178682 - type: dot_f1 value: 59.5539204414808 - type: dot_precision value: 52.77098614506927 - type: dot_recall value: 68.33773087071239 - type: euclidean_accuracy value: 82.62502235202956 - type: euclidean_ap value: 62.51708062651598 - type: euclidean_f1 value: 59.48887837198297 - type: euclidean_precision value: 53.925353925353924 - type: euclidean_recall value: 66.33245382585751 - type: manhattan_accuracy value: 82.57733802229242 - type: manhattan_ap value: 62.4034159268756 - type: manhattan_f1 value: 59.42896615242921 - type: manhattan_precision value: 
52.716503267973856 - type: manhattan_recall value: 68.10026385224275 - type: max_accuracy value: 82.62502235202956 - type: max_ap value: 62.51708062651598 - type: max_f1 value: 59.5539204414808 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 86.74079248651377 - type: cos_sim_ap value: 81.11128912769627 - type: cos_sim_f1 value: 73.39903296054331 - type: cos_sim_precision value: 70.49273307337823 - type: cos_sim_recall value: 76.55528179858331 - type: dot_accuracy value: 86.71362595567975 - type: dot_ap value: 81.07587927324371 - type: dot_f1 value: 73.36112443280334 - type: dot_precision value: 70.42283447836249 - type: dot_recall value: 76.55528179858331 - type: euclidean_accuracy value: 86.73109015407304 - type: euclidean_ap value: 81.11249921439843 - type: euclidean_f1 value: 73.39903296054331 - type: euclidean_precision value: 70.49273307337823 - type: euclidean_recall value: 76.55528179858331 - type: manhattan_accuracy value: 86.7252687546086 - type: manhattan_ap value: 81.05990290681223 - type: manhattan_f1 value: 73.29173525245952 - type: manhattan_precision value: 72.88161400837457 - type: manhattan_recall value: 73.70649830612874 - type: max_accuracy value: 86.74079248651377 - type: max_ap value: 81.11249921439843 - type: max_f1 value: 73.39903296054331 ---
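The `model-index` block above records MTEB benchmark scores (retrieval, classification, clustering, STS, reranking and summarization tasks) in the standard Hugging Face card-metadata format. As a minimal, hedged sketch of how scores in this format are typically produced (this is not the card author's own evaluation script, and the model identifier below is a placeholder), the open-source `mteb` package can be run against a `sentence-transformers` checkpoint:

```python
# Minimal sketch: producing MTEB-style scores for a sentence-embedding checkpoint.
# Assumptions: the `mteb` and `sentence-transformers` packages are installed, and
# "your-org/your-embedding-model" is a placeholder id, not this card's actual repository.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Load the embedding model to be evaluated.
model = SentenceTransformer("your-org/your-embedding-model")

# Evaluate on a small subset of the tasks listed in the model-index above.
evaluation = MTEB(tasks=["Banking77Classification", "SciFact", "BIOSSES"])

# Each task writes a JSON results file; the metric names and values in those files
# correspond to the type/value pairs recorded in the model-index block.
evaluation.run(model, output_folder="results/your-embedding-model")
```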
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
{"tags": ["mteb"], "model-index": [{"name": "bert_1b3_mixlang_newstep3", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 70.11940298507463}, {"type": "ap", "value": 32.37756187516329}, {"type": "f1", "value": 63.92312669545795}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 92.950675}, {"type": "ap", "value": 89.69186819088316}, {"type": "f1", "value": 92.94108521905532}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 50.522}, {"type": "f1", "value": 48.76020527037862}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 14.865}, {"type": "map_at_10", "value": 26.026}, {"type": "map_at_100", "value": 27.586}, {"type": "map_at_1000", "value": 27.622999999999998}, {"type": "map_at_3", "value": 21.859}, {"type": "map_at_5", "value": 24.049}, {"type": "mrr_at_1", "value": 15.504999999999999}, {"type": "mrr_at_10", "value": 26.265}, {"type": "mrr_at_100", "value": 27.810000000000002}, {"type": "mrr_at_1000", "value": 27.847}, {"type": "mrr_at_3", "value": 22.06}, {"type": "mrr_at_5", "value": 24.247}, {"type": "ndcg_at_1", "value": 14.865}, {"type": "ndcg_at_10", "value": 32.934999999999995}, {"type": "ndcg_at_100", "value": 40.627}, {"type": "ndcg_at_1000", "value": 41.524}, {"type": "ndcg_at_3", "value": 24.153}, {"type": "ndcg_at_5", "value": 28.133999999999997}, {"type": "precision_at_1", "value": 14.865}, {"type": "precision_at_10", "value": 5.541}, {"type": "precision_at_100", "value": 0.9159999999999999}, {"type": "precision_at_1000", "value": 0.099}, {"type": "precision_at_3", "value": 10.266}, {"type": "precision_at_5", "value": 8.108}, {"type": "recall_at_1", "value": 14.865}, {"type": "recall_at_10", "value": 55.405}, {"type": "recall_at_100", "value": 91.607}, {"type": "recall_at_1000", "value": 98.506}, {"type": "recall_at_3", "value": 30.797}, {"type": "recall_at_5", "value": 40.541}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 47.028296913559814}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 38.38123118365735}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 58.9616553564134}, {"type": "mrr", "value": 72.16033504814668}]}, {"task": {"type": "STS"}, 
"dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.00899493452621}, {"type": "cos_sim_spearman", "value": 83.85673000958819}, {"type": "euclidean_pearson", "value": 85.65567511199598}, {"type": "euclidean_spearman", "value": 83.90311660870698}, {"type": "manhattan_pearson", "value": 85.37147829428248}, {"type": "manhattan_spearman", "value": 83.74588411039522}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 75.5909090909091}, {"type": "f1", "value": 74.476632049175}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 38.981180962194216}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 34.9394829907367}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 31.277}, {"type": "map_at_10", "value": 42.153}, {"type": "map_at_100", "value": 43.683}, {"type": "map_at_1000", "value": 43.817}, {"type": "map_at_3", "value": 38.454}, {"type": "map_at_5", "value": 40.721000000000004}, {"type": "mrr_at_1", "value": 38.913}, {"type": "mrr_at_10", "value": 48.232}, {"type": "mrr_at_100", "value": 48.888}, {"type": "mrr_at_1000", "value": 48.929}, {"type": "mrr_at_3", "value": 45.279}, {"type": "mrr_at_5", "value": 47.089}, {"type": "ndcg_at_1", "value": 38.913}, {"type": "ndcg_at_10", "value": 48.518}, {"type": "ndcg_at_100", "value": 53.797}, {"type": "ndcg_at_1000", "value": 55.754999999999995}, {"type": "ndcg_at_3", "value": 43.122}, {"type": "ndcg_at_5", "value": 45.869}, {"type": "precision_at_1", "value": 38.913}, {"type": "precision_at_10", "value": 9.413}, {"type": "precision_at_100", "value": 1.567}, {"type": "precision_at_1000", "value": 0.20600000000000002}, {"type": "precision_at_3", "value": 20.791999999999998}, {"type": "precision_at_5", "value": 15.193000000000001}, {"type": "recall_at_1", "value": 31.277}, {"type": "recall_at_10", "value": 60.475}, {"type": "recall_at_100", "value": 82.675}, {"type": "recall_at_1000", "value": 95.298}, {"type": "recall_at_3", "value": 44.388}, {"type": "recall_at_5", "value": 52.242999999999995}, {"type": "map_at_1", "value": 25.593}, {"type": "map_at_10", "value": 35.089999999999996}, {"type": "map_at_100", "value": 36.269}, {"type": "map_at_1000", "value": 36.419000000000004}, {"type": "map_at_3", "value": 32.449}, {"type": "map_at_5", "value": 33.952}, {"type": "mrr_at_1", "value": 32.484}, {"type": "mrr_at_10", "value": 40.725}, {"type": "mrr_at_100", "value": 41.465999999999994}, {"type": "mrr_at_1000", "value": 41.521}, {"type": "mrr_at_3", "value": 38.757999999999996}, {"type": "mrr_at_5", "value": 39.869}, {"type": "ndcg_at_1", "value": 32.484}, {"type": "ndcg_at_10", "value": 40.384}, {"type": "ndcg_at_100", 
"value": 44.984}, {"type": "ndcg_at_1000", "value": 47.528}, {"type": "ndcg_at_3", "value": 36.77}, {"type": "ndcg_at_5", "value": 38.505}, {"type": "precision_at_1", "value": 32.484}, {"type": "precision_at_10", "value": 7.866}, {"type": "precision_at_100", "value": 1.2959999999999998}, {"type": "precision_at_1000", "value": 0.185}, {"type": "precision_at_3", "value": 18.195}, {"type": "precision_at_5", "value": 13.032}, {"type": "recall_at_1", "value": 25.593}, {"type": "recall_at_10", "value": 49.289}, {"type": "recall_at_100", "value": 69.84700000000001}, {"type": "recall_at_1000", "value": 86.329}, {"type": "recall_at_3", "value": 38.51}, {"type": "recall_at_5", "value": 43.349}, {"type": "map_at_1", "value": 35.116}, {"type": "map_at_10", "value": 45.908}, {"type": "map_at_100", "value": 46.979}, {"type": "map_at_1000", "value": 47.046}, {"type": "map_at_3", "value": 42.724000000000004}, {"type": "map_at_5", "value": 44.507999999999996}, {"type": "mrr_at_1", "value": 40.313}, {"type": "mrr_at_10", "value": 49.195}, {"type": "mrr_at_100", "value": 49.996}, {"type": "mrr_at_1000", "value": 50.03300000000001}, {"type": "mrr_at_3", "value": 46.708}, {"type": "mrr_at_5", "value": 48.187999999999995}, {"type": "ndcg_at_1", "value": 40.313}, {"type": "ndcg_at_10", "value": 51.43600000000001}, {"type": "ndcg_at_100", "value": 55.873}, {"type": "ndcg_at_1000", "value": 57.288}, {"type": "ndcg_at_3", "value": 46.038000000000004}, {"type": "ndcg_at_5", "value": 48.729}, {"type": "precision_at_1", "value": 40.313}, {"type": "precision_at_10", "value": 8.382000000000001}, {"type": "precision_at_100", "value": 1.145}, {"type": "precision_at_1000", "value": 0.132}, {"type": "precision_at_3", "value": 20.480999999999998}, {"type": "precision_at_5", "value": 14.219000000000001}, {"type": "recall_at_1", "value": 35.116}, {"type": "recall_at_10", "value": 64.524}, {"type": "recall_at_100", "value": 83.859}, {"type": "recall_at_1000", "value": 93.977}, {"type": "recall_at_3", "value": 50.102999999999994}, {"type": "recall_at_5", "value": 56.818000000000005}, {"type": "map_at_1", "value": 18.488}, {"type": "map_at_10", "value": 25.667}, {"type": "map_at_100", "value": 26.541999999999998}, {"type": "map_at_1000", "value": 26.637}, {"type": "map_at_3", "value": 23.483}, {"type": "map_at_5", "value": 24.667}, {"type": "mrr_at_1", "value": 20.0}, {"type": "mrr_at_10", "value": 27.178}, {"type": "mrr_at_100", "value": 27.989000000000004}, {"type": "mrr_at_1000", "value": 28.07}, {"type": "mrr_at_3", "value": 25.122}, {"type": "mrr_at_5", "value": 26.275}, {"type": "ndcg_at_1", "value": 20.0}, {"type": "ndcg_at_10", "value": 29.736}, {"type": "ndcg_at_100", "value": 34.358}, {"type": "ndcg_at_1000", "value": 37.036}, {"type": "ndcg_at_3", "value": 25.405}, {"type": "ndcg_at_5", "value": 27.441}, {"type": "precision_at_1", "value": 20.0}, {"type": "precision_at_10", "value": 4.712000000000001}, {"type": "precision_at_100", "value": 0.751}, {"type": "precision_at_1000", "value": 0.101}, {"type": "precision_at_3", "value": 10.885}, {"type": "precision_at_5", "value": 7.706}, {"type": "recall_at_1", "value": 18.488}, {"type": "recall_at_10", "value": 40.83}, {"type": "recall_at_100", "value": 62.707}, {"type": "recall_at_1000", "value": 83.41199999999999}, {"type": "recall_at_3", "value": 29.21}, {"type": "recall_at_5", "value": 34.009}, {"type": "map_at_1", "value": 9.532}, {"type": "map_at_10", "value": 15.193000000000001}, {"type": "map_at_100", "value": 16.381}, {"type": "map_at_1000", "value": 16.524}, 
{"type": "map_at_3", "value": 13.386000000000001}, {"type": "map_at_5", "value": 14.261}, {"type": "mrr_at_1", "value": 11.940000000000001}, {"type": "mrr_at_10", "value": 18.285}, {"type": "mrr_at_100", "value": 19.373}, {"type": "mrr_at_1000", "value": 19.467000000000002}, {"type": "mrr_at_3", "value": 16.252}, {"type": "mrr_at_5", "value": 17.26}, {"type": "ndcg_at_1", "value": 11.940000000000001}, {"type": "ndcg_at_10", "value": 19.095000000000002}, {"type": "ndcg_at_100", "value": 25.214}, {"type": "ndcg_at_1000", "value": 28.619}, {"type": "ndcg_at_3", "value": 15.482000000000001}, {"type": "ndcg_at_5", "value": 16.892}, {"type": "precision_at_1", "value": 11.940000000000001}, {"type": "precision_at_10", "value": 3.744}, {"type": "precision_at_100", "value": 0.815}, {"type": "precision_at_1000", "value": 0.124}, {"type": "precision_at_3", "value": 7.710999999999999}, {"type": "precision_at_5", "value": 5.647}, {"type": "recall_at_1", "value": 9.532}, {"type": "recall_at_10", "value": 28.026}, {"type": "recall_at_100", "value": 55.253}, {"type": "recall_at_1000", "value": 79.86999999999999}, {"type": "recall_at_3", "value": 18.084}, {"type": "recall_at_5", "value": 21.553}, {"type": "map_at_1", "value": 23.416}, {"type": "map_at_10", "value": 32.649}, {"type": "map_at_100", "value": 33.983000000000004}, {"type": "map_at_1000", "value": 34.107}, {"type": "map_at_3", "value": 29.254}, {"type": "map_at_5", "value": 31.339}, {"type": "mrr_at_1", "value": 28.778}, {"type": "mrr_at_10", "value": 37.513999999999996}, {"type": "mrr_at_100", "value": 38.458999999999996}, {"type": "mrr_at_1000", "value": 38.517}, {"type": "mrr_at_3", "value": 34.585}, {"type": "mrr_at_5", "value": 36.514}, {"type": "ndcg_at_1", "value": 28.778}, {"type": "ndcg_at_10", "value": 38.233}, {"type": "ndcg_at_100", "value": 44.14}, {"type": "ndcg_at_1000", "value": 46.583000000000006}, {"type": "ndcg_at_3", "value": 32.718}, {"type": "ndcg_at_5", "value": 35.778999999999996}, {"type": "precision_at_1", "value": 28.778}, {"type": "precision_at_10", "value": 7.2090000000000005}, {"type": "precision_at_100", "value": 1.194}, {"type": "precision_at_1000", "value": 0.16}, {"type": "precision_at_3", "value": 15.495999999999999}, {"type": "precision_at_5", "value": 11.781}, {"type": "recall_at_1", "value": 23.416}, {"type": "recall_at_10", "value": 50.063}, {"type": "recall_at_100", "value": 75.4}, {"type": "recall_at_1000", "value": 91.74799999999999}, {"type": "recall_at_3", "value": 35.113}, {"type": "recall_at_5", "value": 42.620999999999995}, {"type": "map_at_1", "value": 18.891}, {"type": "map_at_10", "value": 28.000000000000004}, {"type": "map_at_100", "value": 29.354999999999997}, {"type": "map_at_1000", "value": 29.453000000000003}, {"type": "map_at_3", "value": 24.551000000000002}, {"type": "map_at_5", "value": 26.383000000000003}, {"type": "mrr_at_1", "value": 23.402}, {"type": "mrr_at_10", "value": 32.308}, {"type": "mrr_at_100", "value": 33.242}, {"type": "mrr_at_1000", "value": 33.294000000000004}, {"type": "mrr_at_3", "value": 29.262}, {"type": "mrr_at_5", "value": 30.997000000000003}, {"type": "ndcg_at_1", "value": 23.402}, {"type": "ndcg_at_10", "value": 33.932}, {"type": "ndcg_at_100", "value": 39.925}, {"type": "ndcg_at_1000", "value": 42.126999999999995}, {"type": "ndcg_at_3", "value": 27.816999999999997}, {"type": "ndcg_at_5", "value": 30.554}, {"type": "precision_at_1", "value": 23.402}, {"type": "precision_at_10", "value": 6.747}, {"type": "precision_at_100", "value": 1.147}, {"type": 
"precision_at_1000", "value": 0.15}, {"type": "precision_at_3", "value": 13.469999999999999}, {"type": "precision_at_5", "value": 10.32}, {"type": "recall_at_1", "value": 18.891}, {"type": "recall_at_10", "value": 47.58}, {"type": "recall_at_100", "value": 73.668}, {"type": "recall_at_1000", "value": 88.77000000000001}, {"type": "recall_at_3", "value": 30.726}, {"type": "recall_at_5", "value": 37.547000000000004}, {"type": "map_at_1", "value": 20.303499999999996}, {"type": "map_at_10", "value": 28.263499999999997}, {"type": "map_at_100", "value": 29.431250000000002}, {"type": "map_at_1000", "value": 29.555166666666665}, {"type": "map_at_3", "value": 25.59133333333333}, {"type": "map_at_5", "value": 27.091500000000003}, {"type": "mrr_at_1", "value": 24.19725}, {"type": "mrr_at_10", "value": 31.803750000000004}, {"type": "mrr_at_100", "value": 32.691916666666664}, {"type": "mrr_at_1000", "value": 32.760083333333334}, {"type": "mrr_at_3", "value": 29.447749999999996}, {"type": "mrr_at_5", "value": 30.79858333333334}, {"type": "ndcg_at_1", "value": 24.19725}, {"type": "ndcg_at_10", "value": 33.11925000000001}, {"type": "ndcg_at_100", "value": 38.384916666666655}, {"type": "ndcg_at_1000", "value": 40.991499999999995}, {"type": "ndcg_at_3", "value": 28.5115}, {"type": "ndcg_at_5", "value": 30.718833333333333}, {"type": "precision_at_1", "value": 24.19725}, {"type": "precision_at_10", "value": 6.061666666666666}, {"type": "precision_at_100", "value": 1.0404166666666665}, {"type": "precision_at_1000", "value": 0.14583333333333337}, {"type": "precision_at_3", "value": 13.347083333333334}, {"type": "precision_at_5", "value": 9.747916666666667}, {"type": "recall_at_1", "value": 20.303499999999996}, {"type": "recall_at_10", "value": 43.93183333333334}, {"type": "recall_at_100", "value": 67.47800000000001}, {"type": "recall_at_1000", "value": 85.91425000000001}, {"type": "recall_at_3", "value": 31.160083333333333}, {"type": "recall_at_5", "value": 36.76633333333333}, {"type": "map_at_1", "value": 12.666}, {"type": "map_at_10", "value": 18.448999999999998}, {"type": "map_at_100", "value": 19.448}, {"type": "map_at_1000", "value": 19.54}, {"type": "map_at_3", "value": 16.581000000000003}, {"type": "map_at_5", "value": 17.485999999999997}, {"type": "mrr_at_1", "value": 14.11}, {"type": "mrr_at_10", "value": 19.796}, {"type": "mrr_at_100", "value": 20.785999999999998}, {"type": "mrr_at_1000", "value": 20.861}, {"type": "mrr_at_3", "value": 18.175}, {"type": "mrr_at_5", "value": 18.926000000000002}, {"type": "ndcg_at_1", "value": 14.11}, {"type": "ndcg_at_10", "value": 21.83}, {"type": "ndcg_at_100", "value": 27.017999999999997}, {"type": "ndcg_at_1000", "value": 29.520999999999997}, {"type": "ndcg_at_3", "value": 18.358}, {"type": "ndcg_at_5", "value": 19.719}, {"type": "precision_at_1", "value": 14.11}, {"type": "precision_at_10", "value": 3.819}, {"type": "precision_at_100", "value": 0.701}, {"type": "precision_at_1000", "value": 0.097}, {"type": "precision_at_3", "value": 8.384}, {"type": "precision_at_5", "value": 5.92}, {"type": "recall_at_1", "value": 12.666}, {"type": "recall_at_10", "value": 30.746000000000002}, {"type": "recall_at_100", "value": 54.675}, {"type": "recall_at_1000", "value": 73.57900000000001}, {"type": "recall_at_3", "value": 21.196}, {"type": "recall_at_5", "value": 24.552}, {"type": "map_at_1", "value": 12.53}, {"type": "map_at_10", "value": 17.881}, {"type": "map_at_100", "value": 18.923000000000002}, {"type": "map_at_1000", "value": 19.049}, {"type": "map_at_3", "value": 
16.088}, {"type": "map_at_5", "value": 17.0}, {"type": "mrr_at_1", "value": 15.244}, {"type": "mrr_at_10", "value": 20.906}, {"type": "mrr_at_100", "value": 21.83}, {"type": "mrr_at_1000", "value": 21.913}, {"type": "mrr_at_3", "value": 19.104}, {"type": "mrr_at_5", "value": 19.994999999999997}, {"type": "ndcg_at_1", "value": 15.244}, {"type": "ndcg_at_10", "value": 21.541}, {"type": "ndcg_at_100", "value": 26.799}, {"type": "ndcg_at_1000", "value": 29.927}, {"type": "ndcg_at_3", "value": 18.208}, {"type": "ndcg_at_5", "value": 19.573999999999998}, {"type": "precision_at_1", "value": 15.244}, {"type": "precision_at_10", "value": 4.04}, {"type": "precision_at_100", "value": 0.808}, {"type": "precision_at_1000", "value": 0.125}, {"type": "precision_at_3", "value": 8.672}, {"type": "precision_at_5", "value": 6.283999999999999}, {"type": "recall_at_1", "value": 12.53}, {"type": "recall_at_10", "value": 29.601}, {"type": "recall_at_100", "value": 53.615}, {"type": "recall_at_1000", "value": 76.344}, {"type": "recall_at_3", "value": 20.159}, {"type": "recall_at_5", "value": 23.746000000000002}, {"type": "map_at_1", "value": 21.849}, {"type": "map_at_10", "value": 28.937}, {"type": "map_at_100", "value": 30.003999999999998}, {"type": "map_at_1000", "value": 30.122}, {"type": "map_at_3", "value": 26.150000000000002}, {"type": "map_at_5", "value": 27.744000000000003}, {"type": "mrr_at_1", "value": 25.093}, {"type": "mrr_at_10", "value": 32.143}, {"type": "mrr_at_100", "value": 33.053}, {"type": "mrr_at_1000", "value": 33.134}, {"type": "mrr_at_3", "value": 29.586000000000002}, {"type": "mrr_at_5", "value": 31.116}, {"type": "ndcg_at_1", "value": 25.093}, {"type": "ndcg_at_10", "value": 33.631}, {"type": "ndcg_at_100", "value": 38.893}, {"type": "ndcg_at_1000", "value": 41.692}, {"type": "ndcg_at_3", "value": 28.497}, {"type": "ndcg_at_5", "value": 31.028}, {"type": "precision_at_1", "value": 25.093}, {"type": "precision_at_10", "value": 5.765}, {"type": "precision_at_100", "value": 0.947}, {"type": "precision_at_1000", "value": 0.13}, {"type": "precision_at_3", "value": 12.623999999999999}, {"type": "precision_at_5", "value": 9.347}, {"type": "recall_at_1", "value": 21.849}, {"type": "recall_at_10", "value": 44.767}, {"type": "recall_at_100", "value": 68.298}, {"type": "recall_at_1000", "value": 88.107}, {"type": "recall_at_3", "value": 30.968}, {"type": "recall_at_5", "value": 37.19}, {"type": "map_at_1", "value": 18.409}, {"type": "map_at_10", "value": 27.750999999999998}, {"type": "map_at_100", "value": 29.241}, {"type": "map_at_1000", "value": 29.467}, {"type": "map_at_3", "value": 24.29}, {"type": "map_at_5", "value": 26.448}, {"type": "mrr_at_1", "value": 22.53}, {"type": "mrr_at_10", "value": 31.887999999999998}, {"type": "mrr_at_100", "value": 32.89}, {"type": "mrr_at_1000", "value": 32.956}, {"type": "mrr_at_3", "value": 28.854000000000003}, {"type": "mrr_at_5", "value": 30.751}, {"type": "ndcg_at_1", "value": 22.53}, {"type": "ndcg_at_10", "value": 33.827}, {"type": "ndcg_at_100", "value": 39.749}, {"type": "ndcg_at_1000", "value": 42.677}, {"type": "ndcg_at_3", "value": 28.101}, {"type": "ndcg_at_5", "value": 31.380999999999997}, {"type": "precision_at_1", "value": 22.53}, {"type": "precision_at_10", "value": 6.976}, {"type": "precision_at_100", "value": 1.443}, {"type": "precision_at_1000", "value": 0.23700000000000002}, {"type": "precision_at_3", "value": 13.966000000000001}, {"type": "precision_at_5", "value": 10.909}, {"type": "recall_at_1", "value": 18.409}, {"type": "recall_at_10", 
"value": 46.217000000000006}, {"type": "recall_at_100", "value": 72.882}, {"type": "recall_at_1000", "value": 91.625}, {"type": "recall_at_3", "value": 30.64}, {"type": "recall_at_5", "value": 38.948}, {"type": "map_at_1", "value": 15.875}, {"type": "map_at_10", "value": 21.484}, {"type": "map_at_100", "value": 22.367}, {"type": "map_at_1000", "value": 22.481}, {"type": "map_at_3", "value": 19.686}, {"type": "map_at_5", "value": 20.589}, {"type": "mrr_at_1", "value": 17.560000000000002}, {"type": "mrr_at_10", "value": 23.474999999999998}, {"type": "mrr_at_100", "value": 24.331}, {"type": "mrr_at_1000", "value": 24.426000000000002}, {"type": "mrr_at_3", "value": 21.688}, {"type": "mrr_at_5", "value": 22.603}, {"type": "ndcg_at_1", "value": 17.560000000000002}, {"type": "ndcg_at_10", "value": 25.268}, {"type": "ndcg_at_100", "value": 29.869}, {"type": "ndcg_at_1000", "value": 33.145}, {"type": "ndcg_at_3", "value": 21.622}, {"type": "ndcg_at_5", "value": 23.155}, {"type": "precision_at_1", "value": 17.560000000000002}, {"type": "precision_at_10", "value": 4.067}, {"type": "precision_at_100", "value": 0.6709999999999999}, {"type": "precision_at_1000", "value": 0.10300000000000001}, {"type": "precision_at_3", "value": 9.489}, {"type": "precision_at_5", "value": 6.617000000000001}, {"type": "recall_at_1", "value": 15.875}, {"type": "recall_at_10", "value": 35.064}, {"type": "recall_at_100", "value": 56.857}, {"type": "recall_at_1000", "value": 81.91199999999999}, {"type": "recall_at_3", "value": 24.823999999999998}, {"type": "recall_at_5", "value": 28.62}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 10.637}, {"type": "map_at_10", "value": 18.401999999999997}, {"type": "map_at_100", "value": 20.121}, {"type": "map_at_1000", "value": 20.305999999999997}, {"type": "map_at_3", "value": 15.348}, {"type": "map_at_5", "value": 16.841}, {"type": "mrr_at_1", "value": 23.909}, {"type": "mrr_at_10", "value": 34.512}, {"type": "mrr_at_100", "value": 35.485}, {"type": "mrr_at_1000", "value": 35.528999999999996}, {"type": "mrr_at_3", "value": 31.368000000000002}, {"type": "mrr_at_5", "value": 33.137}, {"type": "ndcg_at_1", "value": 23.909}, {"type": "ndcg_at_10", "value": 25.94}, {"type": "ndcg_at_100", "value": 33.116}, {"type": "ndcg_at_1000", "value": 36.502}, {"type": "ndcg_at_3", "value": 21.046}, {"type": "ndcg_at_5", "value": 22.715}, {"type": "precision_at_1", "value": 23.909}, {"type": "precision_at_10", "value": 8.195}, {"type": "precision_at_100", "value": 1.593}, {"type": "precision_at_1000", "value": 0.22200000000000003}, {"type": "precision_at_3", "value": 15.744}, {"type": "precision_at_5", "value": 12.142999999999999}, {"type": "recall_at_1", "value": 10.637}, {"type": "recall_at_10", "value": 31.251}, {"type": "recall_at_100", "value": 56.477999999999994}, {"type": "recall_at_1000", "value": 75.52600000000001}, {"type": "recall_at_3", "value": 19.482}, {"type": "recall_at_5", "value": 24.145}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 7.786999999999999}, {"type": "map_at_10", "value": 16.182}, {"type": "map_at_100", "value": 22.698}, {"type": "map_at_1000", "value": 24.192}, {"type": "map_at_3", "value": 11.84}, {"type": "map_at_5", "value": 13.602}, {"type": "mrr_at_1", "value": 
56.99999999999999}, {"type": "mrr_at_10", "value": 66.702}, {"type": "mrr_at_100", "value": 67.291}, {"type": "mrr_at_1000", "value": 67.301}, {"type": "mrr_at_3", "value": 64.708}, {"type": "mrr_at_5", "value": 65.946}, {"type": "ndcg_at_1", "value": 46.75}, {"type": "ndcg_at_10", "value": 35.469}, {"type": "ndcg_at_100", "value": 40.077}, {"type": "ndcg_at_1000", "value": 47.252}, {"type": "ndcg_at_3", "value": 39.096}, {"type": "ndcg_at_5", "value": 36.766}, {"type": "precision_at_1", "value": 56.99999999999999}, {"type": "precision_at_10", "value": 28.175}, {"type": "precision_at_100", "value": 9.423}, {"type": "precision_at_1000", "value": 2.017}, {"type": "precision_at_3", "value": 41.667}, {"type": "precision_at_5", "value": 35.199999999999996}, {"type": "recall_at_1", "value": 7.786999999999999}, {"type": "recall_at_10", "value": 21.428}, {"type": "recall_at_100", "value": 45.86}, {"type": "recall_at_1000", "value": 68.83}, {"type": "recall_at_3", "value": 12.992}, {"type": "recall_at_5", "value": 16.091}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 45.985}, {"type": "f1", "value": 39.52034839578244}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 39.141999999999996}, {"type": "map_at_10", "value": 50.255}, {"type": "map_at_100", "value": 50.938}, {"type": "map_at_1000", "value": 50.975}, {"type": "map_at_3", "value": 47.4}, {"type": "map_at_5", "value": 49.172}, {"type": "mrr_at_1", "value": 41.794}, {"type": "mrr_at_10", "value": 53.198}, {"type": "mrr_at_100", "value": 53.82900000000001}, {"type": "mrr_at_1000", "value": 53.857}, {"type": "mrr_at_3", "value": 50.32}, {"type": "mrr_at_5", "value": 52.105999999999995}, {"type": "ndcg_at_1", "value": 41.794}, {"type": "ndcg_at_10", "value": 56.411}, {"type": "ndcg_at_100", "value": 59.663}, {"type": "ndcg_at_1000", "value": 60.590999999999994}, {"type": "ndcg_at_3", "value": 50.73}, {"type": "ndcg_at_5", "value": 53.823}, {"type": "precision_at_1", "value": 41.794}, {"type": "precision_at_10", "value": 7.9159999999999995}, {"type": "precision_at_100", "value": 0.968}, {"type": "precision_at_1000", "value": 0.106}, {"type": "precision_at_3", "value": 20.627000000000002}, {"type": "precision_at_5", "value": 14.038}, {"type": "recall_at_1", "value": 39.141999999999996}, {"type": "recall_at_10", "value": 72.695}, {"type": "recall_at_100", "value": 87.44800000000001}, {"type": "recall_at_1000", "value": 94.313}, {"type": "recall_at_3", "value": 57.415000000000006}, {"type": "recall_at_5", "value": 64.851}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 18.715}, {"type": "map_at_10", "value": 30.253999999999998}, {"type": "map_at_100", "value": 32.123000000000005}, {"type": "map_at_1000", "value": 32.303}, {"type": "map_at_3", "value": 26.203}, {"type": "map_at_5", "value": 28.585}, {"type": "mrr_at_1", "value": 36.42}, {"type": "mrr_at_10", "value": 45.456}, {"type": "mrr_at_100", "value": 46.314}, {"type": "mrr_at_1000", "value": 46.356}, {"type": "mrr_at_3", "value": 42.798}, {"type": "mrr_at_5", "value": 44.365}, {"type": "ndcg_at_1", "value": 36.42}, 
{"type": "ndcg_at_10", "value": 37.747}, {"type": "ndcg_at_100", "value": 44.714999999999996}, {"type": "ndcg_at_1000", "value": 47.866}, {"type": "ndcg_at_3", "value": 34.166999999999994}, {"type": "ndcg_at_5", "value": 35.54}, {"type": "precision_at_1", "value": 36.42}, {"type": "precision_at_10", "value": 10.602}, {"type": "precision_at_100", "value": 1.773}, {"type": "precision_at_1000", "value": 0.234}, {"type": "precision_at_3", "value": 22.84}, {"type": "precision_at_5", "value": 17.315}, {"type": "recall_at_1", "value": 18.715}, {"type": "recall_at_10", "value": 44.199}, {"type": "recall_at_100", "value": 70.097}, {"type": "recall_at_1000", "value": 89.13600000000001}, {"type": "recall_at_3", "value": 30.543}, {"type": "recall_at_5", "value": 36.705}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 30.608}, {"type": "map_at_10", "value": 45.829}, {"type": "map_at_100", "value": 46.786}, {"type": "map_at_1000", "value": 46.869}, {"type": "map_at_3", "value": 42.834}, {"type": "map_at_5", "value": 44.566}, {"type": "mrr_at_1", "value": 61.214999999999996}, {"type": "mrr_at_10", "value": 69.072}, {"type": "mrr_at_100", "value": 69.492}, {"type": "mrr_at_1000", "value": 69.512}, {"type": "mrr_at_3", "value": 67.553}, {"type": "mrr_at_5", "value": 68.446}, {"type": "ndcg_at_1", "value": 61.214999999999996}, {"type": "ndcg_at_10", "value": 54.66}, {"type": "ndcg_at_100", "value": 58.342000000000006}, {"type": "ndcg_at_1000", "value": 60.101000000000006}, {"type": "ndcg_at_3", "value": 49.932}, {"type": "ndcg_at_5", "value": 52.342999999999996}, {"type": "precision_at_1", "value": 61.214999999999996}, {"type": "precision_at_10", "value": 11.65}, {"type": "precision_at_100", "value": 1.4529999999999998}, {"type": "precision_at_1000", "value": 0.169}, {"type": "precision_at_3", "value": 31.78}, {"type": "precision_at_5", "value": 20.979999999999997}, {"type": "recall_at_1", "value": 30.608}, {"type": "recall_at_10", "value": 58.251}, {"type": "recall_at_100", "value": 72.667}, {"type": "recall_at_1000", "value": 84.396}, {"type": "recall_at_3", "value": 47.67}, {"type": "recall_at_5", "value": 52.451}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 90.21999999999998}, {"type": "ap", "value": 85.88889163834975}, {"type": "f1", "value": 90.20542534971861}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 19.785}, {"type": "map_at_10", "value": 31.596000000000004}, {"type": "map_at_100", "value": 32.849000000000004}, {"type": "map_at_1000", "value": 32.903999999999996}, {"type": "map_at_3", "value": 27.772000000000002}, {"type": "map_at_5", "value": 29.952}, {"type": "mrr_at_1", "value": 20.344}, {"type": "mrr_at_10", "value": 32.146}, {"type": "mrr_at_100", "value": 33.349000000000004}, {"type": "mrr_at_1000", "value": 33.396}, {"type": "mrr_at_3", "value": 28.403}, {"type": "mrr_at_5", "value": 30.542}, {"type": "ndcg_at_1", "value": 20.358}, {"type": "ndcg_at_10", "value": 38.288}, {"type": "ndcg_at_100", "value": 44.383}, {"type": "ndcg_at_1000", "value": 45.714}, {"type": "ndcg_at_3", "value": 30.525999999999996}, 
{"type": "ndcg_at_5", "value": 34.393}, {"type": "precision_at_1", "value": 20.358}, {"type": "precision_at_10", "value": 6.16}, {"type": "precision_at_100", "value": 0.9209999999999999}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 13.08}, {"type": "precision_at_5", "value": 9.799}, {"type": "recall_at_1", "value": 19.785}, {"type": "recall_at_10", "value": 58.916000000000004}, {"type": "recall_at_100", "value": 87.24}, {"type": "recall_at_1000", "value": 97.37599999999999}, {"type": "recall_at_3", "value": 37.872}, {"type": "recall_at_5", "value": 47.116}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 88.63429092567262}, {"type": "f1", "value": 88.58612904162257}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 58.080255357957135}, {"type": "f1", "value": 39.561402859935}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 65.03026227303296}, {"type": "f1", "value": 61.10334739098155}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 71.05245460659046}, {"type": "f1", "value": 69.96280851244295}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 33.9762359299763}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 31.670044418802444}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 29.32330726926572}, {"type": "mrr", "value": 30.16727607430052}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.552}, {"type": "map_at_10", "value": 10.692}, {"type": "map_at_100", "value": 13.835}, {"type": "map_at_1000", "value": 15.305}, {"type": "map_at_3", "value": 7.5009999999999994}, {"type": "map_at_5", "value": 8.988}, {"type": "mrr_at_1", "value": 39.318999999999996}, {"type": "mrr_at_10", "value": 48.809000000000005}, {"type": "mrr_at_100", "value": 49.382}, {"type": "mrr_at_1000", "value": 49.442}, {"type": "mrr_at_3", "value": 46.078}, {"type": "mrr_at_5", "value": 48.091}, {"type": "ndcg_at_1", "value": 37.152}, {"type": "ndcg_at_10", "value": 30.159000000000002}, 
{"type": "ndcg_at_100", "value": 28.371000000000002}, {"type": "ndcg_at_1000", "value": 37.632}, {"type": "ndcg_at_3", "value": 34.662}, {"type": "ndcg_at_5", "value": 32.814}, {"type": "precision_at_1", "value": 38.7}, {"type": "precision_at_10", "value": 23.034}, {"type": "precision_at_100", "value": 7.588}, {"type": "precision_at_1000", "value": 2.0709999999999997}, {"type": "precision_at_3", "value": 33.024}, {"type": "precision_at_5", "value": 29.164}, {"type": "recall_at_1", "value": 4.552}, {"type": "recall_at_10", "value": 14.827000000000002}, {"type": "recall_at_100", "value": 29.256}, {"type": "recall_at_1000", "value": 61.739}, {"type": "recall_at_3", "value": 8.38}, {"type": "recall_at_5", "value": 11.123}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 25.424999999999997}, {"type": "map_at_10", "value": 39.972}, {"type": "map_at_100", "value": 41.163}, {"type": "map_at_1000", "value": 41.202}, {"type": "map_at_3", "value": 35.546}, {"type": "map_at_5", "value": 38.146}, {"type": "mrr_at_1", "value": 28.794999999999998}, {"type": "mrr_at_10", "value": 42.315999999999995}, {"type": "mrr_at_100", "value": 43.253}, {"type": "mrr_at_1000", "value": 43.282}, {"type": "mrr_at_3", "value": 38.649}, {"type": "mrr_at_5", "value": 40.858}, {"type": "ndcg_at_1", "value": 28.766000000000002}, {"type": "ndcg_at_10", "value": 47.614000000000004}, {"type": "ndcg_at_100", "value": 52.676}, {"type": "ndcg_at_1000", "value": 53.574}, {"type": "ndcg_at_3", "value": 39.292}, {"type": "ndcg_at_5", "value": 43.633}, {"type": "precision_at_1", "value": 28.766000000000002}, {"type": "precision_at_10", "value": 8.201}, {"type": "precision_at_100", "value": 1.099}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 18.201999999999998}, {"type": "precision_at_5", "value": 13.447000000000001}, {"type": "recall_at_1", "value": 25.424999999999997}, {"type": "recall_at_10", "value": 68.586}, {"type": "recall_at_100", "value": 90.556}, {"type": "recall_at_1000", "value": 97.197}, {"type": "recall_at_3", "value": 47.033}, {"type": "recall_at_5", "value": 57.044}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.054}, {"type": "map_at_10", "value": 83.991}, {"type": "map_at_100", "value": 84.63000000000001}, {"type": "map_at_1000", "value": 84.648}, {"type": "map_at_3", "value": 80.982}, {"type": "map_at_5", "value": 82.857}, {"type": "mrr_at_1", "value": 80.76}, {"type": "mrr_at_10", "value": 87.079}, {"type": "mrr_at_100", "value": 87.185}, {"type": "mrr_at_1000", "value": 87.18599999999999}, {"type": "mrr_at_3", "value": 86.03}, {"type": "mrr_at_5", "value": 86.771}, {"type": "ndcg_at_1", "value": 80.75}, {"type": "ndcg_at_10", "value": 87.85300000000001}, {"type": "ndcg_at_100", "value": 89.105}, {"type": "ndcg_at_1000", "value": 89.213}, {"type": "ndcg_at_3", "value": 84.87400000000001}, {"type": "ndcg_at_5", "value": 86.51299999999999}, {"type": "precision_at_1", "value": 80.75}, {"type": "precision_at_10", "value": 13.352}, {"type": "precision_at_100", "value": 1.528}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.113}, {"type": "precision_at_5", "value": 24.424}, {"type": "recall_at_1", "value": 70.054}, {"type": "recall_at_10", "value": 
95.209}, {"type": "recall_at_100", "value": 99.497}, {"type": "recall_at_1000", "value": 99.973}, {"type": "recall_at_3", "value": 86.654}, {"type": "recall_at_5", "value": 91.313}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 42.71909082787674}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 56.92567540870805}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.225}, {"type": "map_at_10", "value": 5.785}, {"type": "map_at_100", "value": 7.6240000000000006}, {"type": "map_at_1000", "value": 8.094999999999999}, {"type": "map_at_3", "value": 3.882}, {"type": "map_at_5", "value": 4.715}, {"type": "mrr_at_1", "value": 11.0}, {"type": "mrr_at_10", "value": 18.049}, {"type": "mrr_at_100", "value": 19.475}, {"type": "mrr_at_1000", "value": 19.599}, {"type": "mrr_at_3", "value": 15.082999999999998}, {"type": "mrr_at_5", "value": 16.583000000000002}, {"type": "ndcg_at_1", "value": 11.0}, {"type": "ndcg_at_10", "value": 10.59}, {"type": "ndcg_at_100", "value": 18.68}, {"type": "ndcg_at_1000", "value": 27.327}, {"type": "ndcg_at_3", "value": 8.932}, {"type": "ndcg_at_5", "value": 8.126}, {"type": "precision_at_1", "value": 11.0}, {"type": "precision_at_10", "value": 5.89}, {"type": "precision_at_100", "value": 1.778}, {"type": "precision_at_1000", "value": 0.385}, {"type": "precision_at_3", "value": 8.333}, {"type": "precision_at_5", "value": 7.3}, {"type": "recall_at_1", "value": 2.225}, {"type": "recall_at_10", "value": 11.948}, {"type": "recall_at_100", "value": 36.097}, {"type": "recall_at_1000", "value": 78.145}, {"type": "recall_at_3", "value": 5.078}, {"type": "recall_at_5", "value": 7.4079999999999995}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.87898494199837}, {"type": "cos_sim_spearman", "value": 79.3815141247343}, {"type": "euclidean_pearson", "value": 80.984944764735}, {"type": "euclidean_spearman", "value": 79.37984688714191}, {"type": "manhattan_pearson", "value": 80.96139326762788}, {"type": "manhattan_spearman", "value": 79.34882764221987}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.94123934276303}, {"type": "cos_sim_spearman", "value": 73.64821774752144}, {"type": "euclidean_pearson", "value": 79.09149672589201}, {"type": "euclidean_spearman", "value": 73.64174833442063}, {"type": "manhattan_pearson", "value": 79.05135129686983}, {"type": "manhattan_spearman", "value": 73.57858840270084}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 71.37047316191514}, {"type": 
"cos_sim_spearman", "value": 75.56797051373606}, {"type": "euclidean_pearson", "value": 74.59038333631109}, {"type": "euclidean_spearman", "value": 75.55966023907652}, {"type": "manhattan_pearson", "value": 74.56600039917967}, {"type": "manhattan_spearman", "value": 75.52139454559969}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 71.75410054949431}, {"type": "cos_sim_spearman", "value": 72.09826786050286}, {"type": "euclidean_pearson", "value": 72.30015801748517}, {"type": "euclidean_spearman", "value": 72.09347126863909}, {"type": "manhattan_pearson", "value": 72.2692656804079}, {"type": "manhattan_spearman", "value": 72.07403601010577}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.09663528706463}, {"type": "cos_sim_spearman", "value": 85.6296813586495}, {"type": "euclidean_pearson", "value": 84.14347920777777}, {"type": "euclidean_spearman", "value": 85.62948425849926}, {"type": "manhattan_pearson", "value": 84.08840896634038}, {"type": "manhattan_spearman", "value": 85.56264430897471}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 78.55984417539631}, {"type": "cos_sim_spearman", "value": 82.06700938579174}, {"type": "euclidean_pearson", "value": 80.92277218507344}, {"type": "euclidean_spearman", "value": 82.06297899287695}, {"type": "manhattan_pearson", "value": 80.89292734584946}, {"type": "manhattan_spearman", "value": 82.01121177547141}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.70738419575085}, {"type": "cos_sim_spearman", "value": 88.99910283221313}, {"type": "euclidean_pearson", "value": 88.91458218447116}, {"type": "euclidean_spearman", "value": 88.97188755639708}, {"type": "manhattan_pearson", "value": 88.93397958768632}, {"type": "manhattan_spearman", "value": 89.0514960821245}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 65.30101408630514}, {"type": "cos_sim_spearman", "value": 66.15672143838582}, {"type": "euclidean_pearson", "value": 66.61257552376895}, {"type": "euclidean_spearman", "value": 66.00319920690566}, {"type": "manhattan_pearson", "value": 66.81435622246758}, {"type": "manhattan_spearman", "value": 66.35221377631379}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.94191078286725}, {"type": "cos_sim_spearman", "value": 83.69085688689903}, {"type": "euclidean_pearson", "value": 83.28942607749994}, {"type": "euclidean_spearman", "value": 83.69370814043747}, {"type": "manhattan_pearson", "value": 
83.3553242227074}, {"type": "manhattan_spearman", "value": 83.74306572840383}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 88.02503921524934}, {"type": "mrr", "value": 96.47891777793738}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 51.24999999999999}, {"type": "map_at_10", "value": 61.472}, {"type": "map_at_100", "value": 62.132}, {"type": "map_at_1000", "value": 62.161}, {"type": "map_at_3", "value": 58.18299999999999}, {"type": "map_at_5", "value": 60.246}, {"type": "mrr_at_1", "value": 54.0}, {"type": "mrr_at_10", "value": 62.395}, {"type": "mrr_at_100", "value": 62.936}, {"type": "mrr_at_1000", "value": 62.965}, {"type": "mrr_at_3", "value": 59.833000000000006}, {"type": "mrr_at_5", "value": 61.5}, {"type": "ndcg_at_1", "value": 54.0}, {"type": "ndcg_at_10", "value": 66.235}, {"type": "ndcg_at_100", "value": 69.279}, {"type": "ndcg_at_1000", "value": 70.044}, {"type": "ndcg_at_3", "value": 60.679}, {"type": "ndcg_at_5", "value": 63.80200000000001}, {"type": "precision_at_1", "value": 54.0}, {"type": "precision_at_10", "value": 9.167}, {"type": "precision_at_100", "value": 1.0699999999999998}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 24.111}, {"type": "precision_at_5", "value": 16.333000000000002}, {"type": "recall_at_1", "value": 51.24999999999999}, {"type": "recall_at_10", "value": 79.833}, {"type": "recall_at_100", "value": 94.0}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 65.267}, {"type": "recall_at_5", "value": 72.956}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.62673267326733}, {"type": "cos_sim_ap", "value": 87.07482534376774}, {"type": "cos_sim_f1", "value": 80.63687724704674}, {"type": "cos_sim_precision", "value": 82.89334741288279}, {"type": "cos_sim_recall", "value": 78.5}, {"type": "dot_accuracy", "value": 99.63564356435643}, {"type": "dot_ap", "value": 86.98432756163903}, {"type": "dot_f1", "value": 80.91286307053943}, {"type": "dot_precision", "value": 84.05172413793103}, {"type": "dot_recall", "value": 78.0}, {"type": "euclidean_accuracy", "value": 99.62673267326733}, {"type": "euclidean_ap", "value": 87.0756316041764}, {"type": "euclidean_f1", "value": 80.53553038105046}, {"type": "euclidean_precision", "value": 83.01486199575372}, {"type": "euclidean_recall", "value": 78.2}, {"type": "manhattan_accuracy", "value": 99.62574257425743}, {"type": "manhattan_ap", "value": 87.05953308523233}, {"type": "manhattan_f1", "value": 80.50632911392405}, {"type": "manhattan_precision", "value": 81.53846153846153}, {"type": "manhattan_recall", "value": 79.5}, {"type": "max_accuracy", "value": 99.63564356435643}, {"type": "max_ap", "value": 87.0756316041764}, {"type": "max_f1", "value": 80.91286307053943}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": 
"6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 53.59692640735744}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 32.86771187657918}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 45.705711066037644}, {"type": "mrr", "value": 46.25163133435192}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.066382997227624}, {"type": "cos_sim_spearman", "value": 31.00934876843689}, {"type": "dot_pearson", "value": 30.419206995727873}, {"type": "dot_spearman", "value": 31.046571150093747}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.173}, {"type": "map_at_10", "value": 1.154}, {"type": "map_at_100", "value": 5.8180000000000005}, {"type": "map_at_1000", "value": 14.892}, {"type": "map_at_3", "value": 0.415}, {"type": "map_at_5", "value": 0.641}, {"type": "mrr_at_1", "value": 68.0}, {"type": "mrr_at_10", "value": 76.869}, {"type": "mrr_at_100", "value": 77.264}, {"type": "mrr_at_1000", "value": 77.264}, {"type": "mrr_at_3", "value": 75.333}, {"type": "mrr_at_5", "value": 76.333}, {"type": "ndcg_at_1", "value": 62.0}, {"type": "ndcg_at_10", "value": 50.81}, {"type": "ndcg_at_100", "value": 37.659}, {"type": "ndcg_at_1000", "value": 37.444}, {"type": "ndcg_at_3", "value": 55.11200000000001}, {"type": "ndcg_at_5", "value": 51.858000000000004}, {"type": "precision_at_1", "value": 68.0}, {"type": "precision_at_10", "value": 54.800000000000004}, {"type": "precision_at_100", "value": 38.36}, {"type": "precision_at_1000", "value": 16.88}, {"type": "precision_at_3", "value": 57.99999999999999}, {"type": "precision_at_5", "value": 54.800000000000004}, {"type": "recall_at_1", "value": 0.173}, {"type": "recall_at_10", "value": 1.435}, {"type": "recall_at_100", "value": 9.259}, {"type": "recall_at_1000", "value": 36.033}, {"type": "recall_at_3", "value": 0.447}, {"type": "recall_at_5", "value": 0.74}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 1.228}, {"type": "map_at_10", "value": 4.633}, {"type": "map_at_100", "value": 9.171}, {"type": "map_at_1000", "value": 10.58}, {"type": "map_at_3", "value": 2.413}, {"type": "map_at_5", "value": 3.3640000000000003}, {"type": "mrr_at_1", "value": 16.326999999999998}, {"type": "mrr_at_10", "value": 27.071}, {"type": "mrr_at_100", "value": 28.454}, {"type": "mrr_at_1000", "value": 28.475}, {"type": "mrr_at_3", "value": 19.048000000000002}, {"type": "mrr_at_5", "value": 24.354}, {"type": "ndcg_at_1", "value": 14.285999999999998}, {"type": "ndcg_at_10", "value": 13.312}, {"type": "ndcg_at_100", "value": 25.587}, {"type": "ndcg_at_1000", "value": 37.879000000000005}, {"type": "ndcg_at_3", "value": 
11.591}, {"type": "ndcg_at_5", "value": 12.536}, {"type": "precision_at_1", "value": 16.326999999999998}, {"type": "precision_at_10", "value": 13.264999999999999}, {"type": "precision_at_100", "value": 6.061}, {"type": "precision_at_1000", "value": 1.4040000000000001}, {"type": "precision_at_3", "value": 12.245000000000001}, {"type": "precision_at_5", "value": 13.877999999999998}, {"type": "recall_at_1", "value": 1.228}, {"type": "recall_at_10", "value": 9.759}, {"type": "recall_at_100", "value": 38.809}, {"type": "recall_at_1000", "value": 76.229}, {"type": "recall_at_3", "value": 2.738}, {"type": "recall_at_5", "value": 5.510000000000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 69.33179999999999}, {"type": "ap", "value": 14.379598043710034}, {"type": "f1", "value": 53.89665138084001}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 58.245614035087726}, {"type": "f1", "value": 58.3152945231724}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 38.38161204174159}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 82.60118018716099}, {"type": "cos_sim_ap", "value": 62.5064927795416}, {"type": "cos_sim_f1", "value": 59.50177935943061}, {"type": "cos_sim_precision", "value": 54.05172413793103}, {"type": "cos_sim_recall", "value": 66.17414248021109}, {"type": "dot_accuracy", "value": 82.52369315133814}, {"type": "dot_ap", "value": 62.36545569178682}, {"type": "dot_f1", "value": 59.5539204414808}, {"type": "dot_precision", "value": 52.77098614506927}, {"type": "dot_recall", "value": 68.33773087071239}, {"type": "euclidean_accuracy", "value": 82.62502235202956}, {"type": "euclidean_ap", "value": 62.51708062651598}, {"type": "euclidean_f1", "value": 59.48887837198297}, {"type": "euclidean_precision", "value": 53.925353925353924}, {"type": "euclidean_recall", "value": 66.33245382585751}, {"type": "manhattan_accuracy", "value": 82.57733802229242}, {"type": "manhattan_ap", "value": 62.4034159268756}, {"type": "manhattan_f1", "value": 59.42896615242921}, {"type": "manhattan_precision", "value": 52.716503267973856}, {"type": "manhattan_recall", "value": 68.10026385224275}, {"type": "max_accuracy", "value": 82.62502235202956}, {"type": "max_ap", "value": 62.51708062651598}, {"type": "max_f1", "value": 59.5539204414808}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.74079248651377}, {"type": "cos_sim_ap", "value": 81.11128912769627}, {"type": 
"cos_sim_f1", "value": 73.39903296054331}, {"type": "cos_sim_precision", "value": 70.49273307337823}, {"type": "cos_sim_recall", "value": 76.55528179858331}, {"type": "dot_accuracy", "value": 86.71362595567975}, {"type": "dot_ap", "value": 81.07587927324371}, {"type": "dot_f1", "value": 73.36112443280334}, {"type": "dot_precision", "value": 70.42283447836249}, {"type": "dot_recall", "value": 76.55528179858331}, {"type": "euclidean_accuracy", "value": 86.73109015407304}, {"type": "euclidean_ap", "value": 81.11249921439843}, {"type": "euclidean_f1", "value": 73.39903296054331}, {"type": "euclidean_precision", "value": 70.49273307337823}, {"type": "euclidean_recall", "value": 76.55528179858331}, {"type": "manhattan_accuracy", "value": 86.7252687546086}, {"type": "manhattan_ap", "value": 81.05990290681223}, {"type": "manhattan_f1", "value": 73.29173525245952}, {"type": "manhattan_precision", "value": 72.88161400837457}, {"type": "manhattan_recall", "value": 73.70649830612874}, {"type": "max_accuracy", "value": 86.74079248651377}, {"type": "max_ap", "value": 81.11249921439843}, {"type": "max_f1", "value": 73.39903296054331}]}]}]}
dataset
null
588
rinna/bilingual-gpt-neox-4b-8k
rinna
text-generation
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "ja", "en", "dataset:mc4", "dataset:cc100", "dataset:wikipedia", "dataset:EleutherAI/pile", "dataset:togethercomputer/RedPajama-Data-1T", "arxiv:2306.15595", "arxiv:2404.01657", "base_model:rinna/bilingual-gpt-neox-4b", "base_model:finetune:rinna/bilingual-gpt-neox-4b", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
2023-07-31T02:34:21Z
2025-03-23T11:01:26+00:00
50
23
--- base_model: rinna/bilingual-gpt-neox-4b datasets: - mc4 - cc100 - wikipedia - EleutherAI/pile - togethercomputer/RedPajama-Data-1T language: - ja - en license: mit thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png inference: false --- # bilingual-gpt-neox-4b-8k ![rinna-icon](./rinna.png) # Overview **Notice: This model requires `transformers>=4.31.0` to work properly.** This repository provides an English-Japanese bilingual GPT-NeoX model of 3.8 billion parameters. We extend [`rinna/bilingual-gpt-neox-4b`](https://huggingface.co/rinna/bilingual-gpt-neox-4b)'s context length from 2048 to 8192 by fine-tuning on 1.5B extra tokens using [RoPE positional interpolation](https://arxiv.org/abs/2306.15595). * **Library** The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox). * **Model architecture** A 36-layer, 2816-hidden-size transformer-based language model. * **Fine-tuning** The model was trained on long sequences (longer than 4000 tokens) sampled from its pre-training corpora as follows. The fine-tuning data contains **1.5B** tokens in total. - [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) - [Japanese C4](https://huggingface.co/datasets/mc4) - [The Pile](https://huggingface.co/datasets/EleutherAI/pile) - [Redpajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) - [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) * **Model Series** | Variant | Link | | :-- | :--| | Bilingual 4B MiniGPT4 | https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4 | | Bilingual 4B PPO | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-ppo | | Bilingual 4B SFT | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft | | Bilingual 4B 8K | https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k | | Bilingual 4B | https://huggingface.co/rinna/bilingual-gpt-neox-4b | | Japanese 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo | | Japanese 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 | | Japanese 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft | | Japanese 3.6B | https://huggingface.co/rinna/japanese-gpt-neox-3.6b | * **Contributors** - [Tianyu Zhao](https://huggingface.co/tianyuz) - [Toshiaki Wakatsuki](https://huggingface.co/t-w) - [Akio Kaga](https://huggingface.co/rakaga) - [Koh Mitsuda](https://huggingface.co/mitsu-koh) - [Kei Sawada](https://huggingface.co/keisawada) * **Release date** July 31, 2023 # How to use the model **Notice:** Since the model is **sensitive to decoding hyper-parameters** (e.g. `temperature`, `top_p`, `top_k`, `repetition_penalty`), it is suggested to explore the best setting for your task. 
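Since the card notes that output quality is sensitive to decoding hyper-parameters, it can help to compare a few settings on a short prompt before running the full long-generation example below. The following is a minimal sketch and not part of the original card: the loading calls and generation arguments mirror the example that follows, while the candidate `temperature`/`top_p` values, the short prompt, and the 128-token budget are arbitrary illustrations.

~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Same loading calls as in the usage example below.
tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-8k", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/bilingual-gpt-neox-4b-8k")
if torch.cuda.is_available():
    model = model.to("cuda")

prompt = "Socrates says"  # short illustrative prompt
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

# Candidate decoding settings to compare; these values are arbitrary starting points.
candidate_settings = [
    {"temperature": 0.7, "top_p": 0.9},
    {"temperature": 1.0, "top_p": 0.95},
    {"temperature": 1.2, "top_p": 0.95},
]

for setting in candidate_settings:
    with torch.no_grad():
        output_ids = model.generate(
            token_ids.to(model.device),
            max_new_tokens=128,  # keep generations short while exploring
            do_sample=True,
            pad_token_id=tokenizer.pad_token_id,
            bos_token_id=tokenizer.bos_token_id,
            eos_token_id=tokenizer.eos_token_id,
            **setting,
        )
    print(setting)
    print(tokenizer.decode(output_ids.tolist()[0]))
    print("-" * 40)
~~~~

Reading the short continuations side by side is usually enough to choose a setting to carry over into the full example.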
~~~~python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-8k", use_fast=False) model = AutoModelForCausalLM.from_pretrained("rinna/bilingual-gpt-neox-4b-8k") if torch.cuda.is_available(): model = model.to("cuda") text = "Socrates says" token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_new_tokens=4000, min_new_tokens=4000, do_sample=True, temperature=1.0, top_p=0.95, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id ) output = tokenizer.decode(output_ids.tolist()[0]) print(output) """ Socrates says that he is not a bad man because the people of his city-state want to kill him. For a just man, if someone gives them their life over, they will lose it by violence. If this happens at the hands of another, that person will be as bad as Plato's Socratic slave, and Socrates will suffer accordingly (B 134 ff). The Stranger's final remark concerns the distinction between knowledge and wisdom. While the Stranger seems to claim that all people can learn moral lessons through observation of how the world actually works, Socrates responds by saying: "What? Am I a skilful painter?" I replied [to his question] (499). "No, indeed I am not, Socrates; but you are one who knows how to paint. You have painted a little picture and I know nothing about art. In this respect what do I know or can learn from you?" (D 1015) Socrates suggests that it makes sense to define the knowledge required of a good person as any skill which we can acquire by observing real behavior. However, there appears to be a problem in this definition: it seems unlikely that everyone can have such a skill. Certainly, if he were able to see the actions of other people, he would understand how we should act, even though his own response to these actions would not necessarily satisfy moral rules. Even less sophisticated people might reasonably conclude that their own actions must conform with accepted moral standards of behavior. Hence, it seems that all people, at least some of us, need _some_ form of training. ## **The nature of education and character** Having set out our ideas of justice and virtue, and the ways in which they relate to political society, Socrates now brings the story of his pupil Phaedrus to a close. He tells Phaedrus that "my teaching you is as simple as that of your own body. If you were to lay it out for yourself, you would not discover its form" (B 287–8). The two men had originally been introduced as students undertaking an exercise called pedagogy. Now, however, Socrates has presented Phaedrus with the idea that his task involves making predictions concerning events yet to come (B 169). A better understanding of these events will be more useful than mere memorization. To achieve this purpose, the young philosopher must be careful not to waste his time doing the unnecessary things that ordinary humans tend to do. Socrates asks Phaedrus whether a good philosopher really needs to do no work. The answer given is "yes", meaning that he does not need to study the classics and develop a philosophical tradition in order to make himself a good person, nor to go through a lengthy course of philosophy and other education. Rather, he should simply practice being an active, creative, and imaginative thinker ( _eikasōma_ ). 
Such persons are well qualified to judge situations on their own terms, rather than on those provided by tradition (B 296). Once again, Socrates emphasizes the difference between the intellectual achievements which follow naturally from education and those which require intellectual effort alone. When asked whether this sort of education can produce a good man, Socrates replies in the affirmative: "Surely it would appear impossible that someone could attain the most important parts of wisdom, unless he was a student of human affairs" (B 364). Socrates also points out that having been educated properly helps a person to make good choices when faced with difficult decisions: So for this same reason, if you did not take up your craft with me, that is, your profession, when you were young, you would not be a fit person to judge how you ought to vote; because you would not consider each thing in accordance with its true nature" (B 366). As Plato often asserts throughout the _Apology_, Socrates regards learning as essential to the acquisition of wisdom but education can never substitute for the inborn capacities of a child. This is not to say that children lack wisdom or that they cannot mature. Indeed, Socrates explains that education is sometimes needed even by individuals who can solve problems for themselves (B 343–67), and Socrates later refers to this activity (C 738 ff) as _technēsēs_. However, there is always something special about childhood initiating certain capacities. We usually give up the right to participate in education at puberty so as to prepare us for adult life, for example, without being informed that our bodies and intelligence can also grow old (B 1165–70). ## **Socrates's defence of democracy and Socratic method** Following a lengthy description of Socrates's educational programme, Plato moves directly into the matter of democratic politics and citizenship in Book III. On the first day of the trial, Socrates takes up the theme of democracy once again: "For you are looking for this thing, my friends, that is to say, the good citizenship to which every person stands entitled" (389). Before continuing, Socrates introduces three principles that he believes form the very heart of good citizenship: the good gods, respect for nature, and love of beauty. Socrates describes these principles in various ways: 1. All citizens of a democracy are expected to behave honourably (390). The citizen should avoid doing anything harmful (to others or to himself) and everything good. There is therefore no way to avoid acting dishonourably (391); but no one can avoid harming himself, for his actions will harm the community as a whole (392–5). 2. Each individual is equally in a position of power and authority, and this means that the citizens must share responsibility for the good government of the state (395). 3. Respect for nature means that citizens will observe that both laws of nature and the opinions of other people control their actions, so that they must choose between the best available alternatives. Anyone who fails to adopt reasoned opinion will be wrong in principle (399). This entails that citizens will have to choose among the best policies that prevail within the community (ibid.). So, while the citizens will have authority and power, this only exists so long as the laws and opinions of which they approve prevail in general over those of which they disapprove. The only way they can get any power at all over their fellow-citizens is either through punishment, or through elections. 
These provide the means by which citizens can express their approval of a policy or disapproval of a policy. The latter occurs when citizens elect the individuals responsible for making the laws. While democracy may be described as a'mixed' government, it is not possible for citizens to choose those whom they wish to vote for (399). Instead, they decide who should have a voice. Those elected speak for themselves, they do not listen to the advice of their colleagues, and ultimately the result will be chosen by the people themselves (399–401). Once again, Socrates is clearly trying to convince his interrogators that the best interests of the city-state depend on giving a larger voice to the public in running its affairs. ## **Plato's reply to Socrates** Plato's rejoinder shows his great skill in dialogue. He presents the argument in familiar forms: analogy, discussion, and so on. Although Socrates makes some valid points at times along the way, he usually finds reasons for disagreeing with the arguments that he offers to support his claims. As he repeatedly does throughout Book II, the Stranger then uses Socrates's own words against him. To begin with, the Stranger dismisses the claim that each person ... """ ~~~~ --- # Tokenization The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. * The tokenizer has a vocabulary size of 65,536. * It uses *byte fallback* to decompose unknown text pieces into UTF-8 byte pieces to avoid producing `<UNK>` tokens. * It can recognize *consecutive whitespaces*, *newlines*, and *tabs* to handle structured texts better. * We turned off the default behaviour of prepending leading whitespace because it is not beneficial for processing Japanese. * Specifically, single whitespace is always processed as one token so that any English word won't have a preceding whitespace like in many other tokenizers (e.g. `_Hello`). * This decision trades the English processing efficiency for a unified way to treat whitespaces. * It leads to a significantly lower loss of next token prediction on English data because whitespaces are easy to predict. * **Don't forget to set `use_fast=False` to make the above features function correctly.** (A short tokenizer usage sketch illustrating these behaviours appears after the citation section below.) --- # How to cite ```bibtex @misc{rinna-bilingual-gpt-neox-4b-8k, title = {rinna/bilingual-gpt-neox-4b-8k}, author = {Zhao, Tianyu and Wakatsuki, Toshiaki and Kaga, Akio and Mitsuda, Koh and Sawada, Kei}, url = {https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k} } @inproceedings{sawada2024release, title = {Release of Pre-Trained Models for the {J}apanese Language}, author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh}, booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)}, month = {5}, year = {2024}, pages = {13898--13905}, url = {https://aclanthology.org/2024.lrec-main.1213}, note = {\url{https://arxiv.org/abs/2404.01657}} } ``` --- # License [The MIT license](https://opensource.org/licenses/MIT)
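To complement the Tokenization notes above, here is a minimal sketch rather than part of the original card: it loads the slow sentencepiece tokenizer exactly as the card instructs (`use_fast=False`) and prints the token pieces for a few sample strings, so the whitespace, newline, and byte-fallback behaviours described above can be inspected directly. The sample strings and the printed format are illustrative assumptions; the model ID and loading call are taken from the card itself.

~~~~python
from transformers import AutoTokenizer

# use_fast=False is required for the behaviours described in the Tokenization section.
tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-8k", use_fast=False)

samples = [
    "Hello world",    # no leading-whitespace marker is prepended
    "a  b\tc\nd",     # consecutive spaces, a tab, and a newline
    "こんにちは世界",   # Japanese text
]

for text in samples:
    ids = tokenizer.encode(text, add_special_tokens=False)
    pieces = tokenizer.convert_ids_to_tokens(ids)
    # Round-tripping should reproduce the original string, including whitespace.
    restored = tokenizer.decode(ids)
    print(repr(text), "->", pieces, "->", repr(restored))
~~~~

Printing the pieces should make it easy to confirm that single spaces are kept as their own tokens and that characters outside the vocabulary fall back to byte pieces instead of `<UNK>`.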
[ "CRAFT" ]
Non_BioNLP
# bilingual-gpt-neox-4b-8k ![rinna-icon](./rinna.png) # Overview **Notice: This model requires `transformers>=4.31.0` to work properly.** This repository provides an English-Japanese bilingual GPT-NeoX model of 3.8 billion parameters. We extend [`rinna/bilingual-gpt-neox-4b`](https://huggingface.co/rinna/bilingual-gpt-neox-4b)'s context length from 2048 to 8192 by fine-tuning on 1.5B extra tokens using [RoPE positional interpolation](https://arxiv.org/abs/2306.15595). * **Library** The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox). * **Model architecture** A 36-layer, 2816-hidden-size transformer-based language model. * **Fine-tuning** The model was trained on long sequences (longer than 4000 tokens) sampled from its pre-training corpora as follows. The fine-tuning data contains **1.5B** tokens in total. - [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) - [Japanese C4](https://huggingface.co/datasets/mc4) - [The Pile](https://huggingface.co/datasets/EleutherAI/pile) - [Redpajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) - [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) * **Model Series** | Variant | Link | | :-- | :--| | Bilingual 4B MiniGPT4 | https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4 | | Bilingual 4B PPO | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-ppo | | Bilingual 4B SFT | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft | | Bilingual 4B 8K | https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k | | Bilingual 4B | https://huggingface.co/rinna/bilingual-gpt-neox-4b | | Japanese 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo | | Japanese 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 | | Japanese 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft | | Japanese 3.6B | https://huggingface.co/rinna/japanese-gpt-neox-3.6b | * **Contributors** - [Tianyu Zhao](https://huggingface.co/tianyuz) - [Toshiaki Wakatsuki](https://huggingface.co/t-w) - [Akio Kaga](https://huggingface.co/rakaga) - [Koh Mitsuda](https://huggingface.co/mitsu-koh) - [Kei Sawada](https://huggingface.co/keisawada) * **Release date** July 31, 2023 # How to use the model **Notice:** Since the model is **sensitive to decoding hyper-parameters** (e.g. `temperature`, `top_p`, `top_k`, `repetition_penalty`), it is suggested to explore the best setting for your task. ~~~~python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-8k", use_fast=False) model = AutoModelForCausalLM.from_pretrained("rinna/bilingual-gpt-neox-4b-8k") if torch.cuda.is_available(): model = model.to("cuda") text = "Socrates says" token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_new_tokens=4000, min_new_tokens=4000, do_sample=True, temperature=1.0, top_p=0.95, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id ) output = tokenizer.decode(output_ids.tolist()[0]) print(output) """ Socrates says that he is not a bad man because the people of his city-state want to kill him. For a just man, if someone gives them their life over, they will lose it by violence. 
If this happens at the hands of another, that person will be as bad as Plato's Socratic slave, and Socrates will suffer accordingly (B 134 ff). The Stranger's final remark concerns the distinction between knowledge and wisdom. While the Stranger seems to claim that all people can learn moral lessons through observation of how the world actually works, Socrates responds by saying: "What? Am I a skilful painter?" I replied [to his question] (499). "No, indeed I am not, Socrates; but you are one who knows how to paint. You have painted a little picture and I know nothing about art. In this respect what do I know or can learn from you?" (D 1015) Socrates suggests that it makes sense to define the knowledge required of a good person as any skill which we can acquire by observing real behavior. However, there appears to be a problem in this definition: it seems unlikely that everyone can have such a skill. Certainly, if he were able to see the actions of other people, he would understand how we should act, even though his own response to these actions would not necessarily satisfy moral rules. Even less sophisticated people might reasonably conclude that their own actions must conform with accepted moral standards of behavior. Hence, it seems that all people, at least some of us, need _some_ form of training. ## **The nature of education and character** Having set out our ideas of justice and virtue, and the ways in which they relate to political society, Socrates now brings the story of his pupil Phaedrus to a close. He tells Phaedrus that "my teaching you is as simple as that of your own body. If you were to lay it out for yourself, you would not discover its form" (B 287–8). The two men had originally been introduced as students undertaking an exercise called pedagogy. Now, however, Socrates has presented Phaedrus with the idea that his task involves making predictions concerning events yet to come (B 169). A better understanding of these events will be more useful than mere memorization. To achieve this purpose, the young philosopher must be careful not to waste his time doing the unnecessary things that ordinary humans tend to do. Socrates asks Phaedrus whether a good philosopher really needs to do no work. The answer given is "yes", meaning that he does not need to study the classics and develop a philosophical tradition in order to make himself a good person, nor to go through a lengthy course of philosophy and other education. Rather, he should simply practice being an active, creative, and imaginative thinker ( _eikasōma_ ). Such persons are well qualified to judge situations on their own terms, rather than on those provided by tradition (B 296). Once again, Socrates emphasizes the difference between the intellectual achievements which follow naturally from education and those which require intellectual effort alone. When asked whether this sort of education can produce a good man, Socrates replies in the affirmative: "Surely it would appear impossible that someone could attain the most important parts of wisdom, unless he was a student of human affairs" (B 364). Socrates also points out that having been educated properly helps a person to make good choices when faced with difficult decisions: So for this same reason, if you did not take up your craft with me, that is, your profession, when you were young, you would not be a fit person to judge how you ought to vote; because you would not consider each thing in accordance with its true nature" (B 366). 
As Plato often asserts throughout the _Apology_, Socrates regards learning as essential to the acquisition of wisdom but education can never substitute for the inborn capacities of a child. This is not to say that children lack wisdom or that they cannot mature. Indeed, Socrates explains that education is sometimes needed even by individuals who can solve problems for themselves (B 343–67), and Socrates later refers to this activity (C 738 ff) as _technēsēs_. However, there is always something special about childhood initiating certain capacities. We usually give up the right to participate in education at puberty so as to prepare us for adult life, for example, without being informed that our bodies and intelligence can also grow old (B 1165–70). ## **Socrates's defence of democracy and Socratic method** Following a lengthy description of Socrates's educational programme, Plato moves directly into the matter of democratic politics and citizenship in Book III. On the first day of the trial, Socrates takes up the theme of democracy once again: "For you are looking for this thing, my friends, that is to say, the good citizenship to which every person stands entitled" (389). Before continuing, Socrates introduces three principles that he believes form the very heart of good citizenship: the good gods, respect for nature, and love of beauty. Socrates describes these principles in various ways: 1. All citizens of a democracy are expected to behave honourably (390). The citizen should avoid doing anything harmful (to others or to himself) and everything good. There is therefore no way to avoid acting dishonourably (391); but no one can avoid harming himself, for his actions will harm the community as a whole (392–5). 2. Each individual is equally in a position of power and authority, and this means that the citizens must share responsibility for the good government of the state (395). 3. Respect for nature means that citizens will observe that both laws of nature and the opinions of other people control their actions, so that they must choose between the best available alternatives. Anyone who fails to adopt reasoned opinion will be wrong in principle (399). This entails that citizens will have to choose among the best policies that prevail within the community (ibid.). So, while the citizens will have authority and power, this only exists so long as the laws and opinions of which they approve prevail in general over those of which they disapprove. The only way they can get any power at all over their fellow-citizens is either through punishment, or through elections. These provide the means by which citizens can express their approval of a policy or disapproval of a policy. The latter occurs when citizens elect the individuals responsible for making the laws. While democracy may be described as a'mixed' government, it is not possible for citizens to choose those whom they wish to vote for (399). Instead, they decide who should have a voice. Those elected speak for themselves, they do not listen to the advice of their colleagues, and ultimately the result will be chosen by the people themselves (399–401). Once again, Socrates is clearly trying to convince his interrogators that the best interests of the city-state depend on giving a larger voice to the public in running its affairs. ## **Plato's reply to Socrates** Plato's rejoinder shows his great skill in dialogue. He presents the argument in familiar forms: analogy, discussion, and so on. 
Although Socrates makes some valid points at times along the way, he usually finds reasons for disagreeing with the arguments that he offers to support his claims. As he repeatedly does throughout Book II, the Stranger then uses Socrates's own words against him. To begin with, the Stranger dismisses the claim that each person ... """ ~~~~ --- # Tokenization The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. * The tokenizer has a vocabulary size of 65,536. * It uses *byte fallback* to decompose unknown text pieces into UTF-8 byte pieces to avoid producing `<UNK>` tokens. * It can recognize *consecutive whitespaces*, *newlines*, and *tabs* to handle structured texts better. * We turned off the default behaviour of prepending leading whitespace because it is not beneficial for processing Japanese. * Specifically, single whitespace is always processed as one token so that any English word won't have a preceding whitespace like in many other tokenizers (e.g. `_Hello`). * This decision trades the English processing efficiency for a unified way to treat whitespaces. * It leads to a significantly lower loss of next token prediction on English data because whitespaces are easy to predict. * **Don't forget to set `use_fast=False` to make the above features function correctly.** --- # How to cite ```bibtex @misc{rinna-bilingual-gpt-neox-4b-8k, title = {rinna/bilingual-gpt-neox-4b-8k}, author = {Zhao, Tianyu and Wakatsuki, Toshiaki and Kaga, Akio and Mitsuda, Koh and Sawada, Kei}, url = {https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k} } @inproceedings{sawada2024release, title = {Release of Pre-Trained Models for the {J}apanese Language}, author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh}, booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)}, month = {5}, year = {2024}, pages = {13898--13905}, url = {https://aclanthology.org/2024.lrec-main.1213}, note = {\url{https://arxiv.org/abs/2404.01657}} } ``` --- # License [The MIT license](https://opensource.org/licenses/MIT)
{"base_model": "rinna/bilingual-gpt-neox-4b", "datasets": ["mc4", "cc100", "wikipedia", "EleutherAI/pile", "togethercomputer/RedPajama-Data-1T"], "language": ["ja", "en"], "license": "mit", "thumbnail": "https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png", "inference": false}
dataset
null
589
Free-Law-Project/modernbert-embed-base_finetune_8192
Free-Law-Project
sentence-similarity
[ "sentence-transformers", "safetensors", "modernbert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:351", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:nomic-ai/modernbert-embed-base", "base_model:finetune:nomic-ai/modernbert-embed-base", "license:cc0-1.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-03-05T19:07:29Z
2025-03-18T00:06:11+00:00
36
0
--- base_model: nomic-ai/modernbert-embed-base language: - en library_name: sentence-transformers license: cc0-1.0 metrics: - cosine_accuracy pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:351 - loss:MultipleNegativesRankingLoss widget: - source_sentence: 'Rose, J. Appeal from a judgment of the Supreme Court ( Malone, Jr., J. ), entered June 13, 2001 in Albany County, which partially granted petitioner ’ s application, in a proceeding pursuant to CPLR article 78, to review a determination of the Department of Health reducing a component of its Medicaid reimbursement rate. Petitioner, a residential health care facility operating in Chemung County, commenced this proceeding seeking, inter alia, annulment of respondents ’ determination adjusting its case mix index based on misclassifications revealed in an audit of patient review instrument data conducted by the Department of Health ( hereinafter Department ) and recalculating petitioner ’ s Medicaid reimbursement rate for the period beginning April 1, 1999. 1 Specifically, the Department found that petitioner had improperly classified 28 patients as receiving restorative therapy rather than maintenance therapy, reduced petitioner ’ s reimbursement rate accordingly, and directed that future patient assessments be performed by an independent auditor. Petitioner argued that the Department ’ s nurse - auditors had improperly “ second - guessed ” the physician - prescribed rehabilitative care plans for its patients by denying reimbursement even though petitioner had provided restorative therapy as prescribed. Petitioner also argued that the Department acted arbitrarily and capriciously in using a fixed general rule precluding reimbursement for restorative therapy unless it produces actual improvement ( hereinafter actual improvement standard ) that has not been properly adopted and filed as a formal regulation. Supreme Court accepted this latter argument, granted the petition and remitted the matter to respondents to review the patient classifications without re * 773course to the actual improvement standard. Respondents now appeal. 2 Respondents argue that Supreme Court ’ s ruling was improper because the Department ’ s actual improvement standard is based on a rational interpretation of an existing regulation and, thus, is not an unfiled rule. Petitioner reiterates its contentions that the denial of reimbursement for restorative therapy provided to its patients was improper both because it was based on an auditor ’ s after - the - fact medical judgment and on an unfiled rule requiring actual improvement. Since the Department ’ s auditors were not required to defer to the judgments of petitioner ’ s physicians and therapists in retrospectively reviewing what patient care qualified for Medicaid reimbursement ( see, Concourse Rehabilitation & Nursing Ctr. v DeBuono, US Dist Ct, SD NY, June 11, 1988, Conti, J., slip op at 12, appeal dismissed 179 F3d 38 ), we find no merit in petitioner ’ s first contention. Rather, as considered by Supreme Court and as presented on appeal, the central issue is whether respondents ’ actual improvement standard for the restorative therapy classification is a rational interpretation of an existing regulation or a new unfiled rule being applied in violation of the State Administrative Procedure Act. Under 10 NYCRR 86 - 2. 
30 ( i ) ( Instructions : Patient Review Instrument [ PRI ] [ 27 ] ), a restorative therapy classification is proper where “ [ t ] here is positive potential for improved functional status within a short and predictable period of time ” and the “ [ t ] herapy plan of care and progress notes * * * support that [ the ] patient has this potential / is improving. ” In its clarification sheet provided to nursing homes, the Department explains that the phrase “ has this potential / is improving ” means that the patient must demonstrate both the potential for functional improvement and the actual occurrence of such improvement in order to qualify for the restorative therapy classification. On this appeal, the Department acknowledges that it has a fixed policy of applying the quoted regulation in this manner. Contrary to Supreme Court ’ s conclusion, we find that the Department ’ s clarification sheet is interpretive, that its interpretation has a rational basis and that, therefore, the resulting actual improvement standard does not constitute an improper unfiled rule ( see, State Administrative Procedure Act * 774 § 102 [ 2 ] [ b ] [ iv ] ; see also, Matter of Dubb Enters. v New York State Liq. Auth., 187 AD2d 831, 833 ; cf, Matter of Cordero v Corbisiero, 80 NY2d 771, 772 - 773 ; Matter of Stuyvesant Polyclinic v Axelrod, 117 AD2d 99, 101 ). Generally, “ courts will defer to an agency ’ s interpretation of its own regulations if not irrational ” ( Matter of Silver Lake Nursing Home v Axelrod, 156 AD2d 789, 790 ; see, Matter of Marzec v DeBuono, 95 NY2d 262, 266 ; Matter of County of Rockland v Axelrod, 157 AD2d 960, 961 ), and the agency ’ s interpretation is not rendered irrational simply because the regulation may be susceptible to a different rational interpretation ( Matter of Jennings v New York State Off. of Mental Health, 90 NY2d 227, 239 ). Petitioner focuses on the role played by the forward slash or virgule in the phrase “ patient has this potential / is improving. ” Arguing that common usage reflects that the virgule merely means “ or, ” petitioner concludes that the Department ’ s requirements of potential improvement and actual improvement contradicts the language of the regulation. Our view of the use of the virgule in the regulation at issue here leads to a contrary conclusion. “ Virgule ” has been defined as a symbol used to denote, inter alia, “ or ” or “ and or ” ( see, Webster ’ s Third New International Dictionary 2555 [ unabridged 1986 ], cross - referencing “ diagonal, ” Webster ’ s Third New International Dictionary 622 [ unabridged 1986 ] ). Even defined in this way, the virgule allows for usage as “ and, ” resulting in no contradiction when both alternatives apply. However, “ virgule ” is more comprehensively defined as “ a short oblique stroke ( / ) between two words indicating that whichever is appropriate may be chosen to complete the sense of the text in which they occur ” ( Random House Dictionary of the English Language 2125 [ unabridged 2d ed 1993 ] ). This definition is particularly apt here because the phrase “ patient has this potential / is improving ” follows, and is parallel to, the preceding phrase “ therapy plan of care and progress notes. ” To interpret the entire regulation, rather than parse the latter phrase only, it is rational to view the virgule as indicating that the reader should use the words that most appropriately complete the sense of the whole sentence. 
As the earlier phrase has two concepts with one anticipating future progress and the other reporting actual progress, the phrase “ patient has this potential / is improving ” provides the choice between potential and actual circumstances depending upon whether a plan for a patient or a patient ’ s progress is being considered. Interpreted this way, the regulation requires a therapy plan to set forth the patient ’ s potential for improvement and the patient ’ s prog * 775ress notes to reflect actual improvement in order to qualify as restorative. Such an interpretation is also consistent with the overall regulatory scheme, for it seeks to assure that restorative therapy is utilized when it potentially will result in patient improvement while excluding reimbursement if the expected improvement is not achieved ( see, Concourse Rehabilitation & Nursing Ctr. v Whalen, 249 F3d 136, 143 - 146 ). 3 Given the parallel structure of the pertinent phrases of the regulation and the recognized use of the virgule to implement such parallelism, we find no conflict between the cited regulation and respondents ’ interpretation, and conclude that their interpretation has a rational basis. Finally, petitioner ’ s contention that the issue is not judicially reviewable because the Department, through its auditors, did not expressly rely on the actual improvement standard in reclassifying petitioner ’ s patients is belied by the petition itself, which narrowly framed the issue by asserting that the Department ’ s actual improvement standard had resulted in the reclassifications. Accordingly, it was error to grant the petition and require further assessment by the Department. Crew III, J. P., Peters, Mugglin and Lahtinen, JJ., concur. Ordered that the judgment is modified, on the law, without costs, by reversing so much thereof as partially granted the petition ; petition denied in its entirety ; and, as so modified, affirmed. . We refer the reader to Concourse Rehabilitation & Nursing Ctr. v Whalen ( 249 F3d 136 ) for an overview of the Medicaid program and Matter of Teresian House Nursing Home Co. v Chassin ( 218 AD2d 250 ) for a description of its process for auditing patient assessments. . Since the judgment issued by Supreme Court is nonimal and, thus, not appealable as of right ( see, CPLR 5701 [ b ] [ 1 ] ; [ c ] ), we exercise our authority to grant permission to appeal sua sponte given the importance of the issue presented ( see, Matter of Gane v Ambach, 135 AD2d 1013, 1013 - 1014 ). . The Health Care Financing Agency ’ s “ Carriers Manual ” provides as follows : “ Restorative Therapy. To constitute physical therapy a service must, among other things, be reasonable and necessary to the treatment of the individual ’ s illness. * * * In addition, there must be an expectation that the patient ’ s condition will improve significantly in a reasonable ( and generally predictable ) period of time. However, if at any point in the treatment of an illness, it is determined that the expectations will not materialize, the services will no longer be considered reasonable and necessary ; and they, therefore, should be excluded from coverage under § 1862 ( a ) ( 1 ) of the Social Security Act [ 42 USC § 1862 ( a ) ( 1 ) ] ” ( Carriers Manual, part 3, ch II, § 2210. 1 [ emphasis supplied ] ).' sentences: - What are the legal standards for proving legal malpractice in New York? - What are the criteria for granting a motion to dismiss in a criminal trial? 
- What determines Medicaid reimbursement eligibility for restorative therapy in New York? - source_sentence: 'Bacon, J. The grounds on which the plaintiffs ask the relief to which they suppose themselves entitled are two fold. First, they allege that the proceedings of the defendants are calculated to do incalculable injury to the farms of the plaintiffs, by cutting off and drying up their springs, and destroying the growth of their young timber, and that these proceedings are conducted in bad faith and with the intent to injure the plaintiffs, and benefit the lands of other parties not contributing to the expense of the work ; and secondly, they insist that the act under which the defendants are assuming to perform the work in question is unconstitutional and void, as depriving the plaintiffs of their property, not for any public use, and without providing them a just compensation therefor. I shall spend no time upon the first branch of the plaintiffs ’ case, because there is no evidence whatever before me tending to show that the defendants are acting in bad faith ; and although there is some diversity of opinion whether the mode adopted by the defendants is the one best calculated to secure the result at which they are aiming, and whether the manner of its execution is the most judicious, yet this may be deemed at best a balanced question, on the evidence. Even if they err in judgment, a court would hardly be justified in interfering by the summary process of injunction to restrain their proceedings. Unless the defendants are violating the plain and manifiest intent and object of the statute under which they are acting, or are proceeding in bad faith, the court should not interpose its a, u * 168thority to suspend the work. In either aspect, I see no sufficient ground, as disclosed by the evidence, to entitle the plaintiff to the relief they ask under the first head of their complaint. The more important question, as it was the one most elaborately and ably argued by the counsel on both sides, respects the inquiry whether the act of April 16th, 1854, under which the defendants are carrying on the work of draining, the Rome swamp, is not a violation of the constitution, and therefore void. The plaintiffs ’ counsel insists that the act is a violation of the constitutional inhibition against taking private property, because, ( 1. ) It is not taken for a public use ; and ( 2. ) Because no just compensation is provided for the parties whose property is taken. I. That the property of A. cannot be taken and appropriated to the use of B., however beneficial the change may bej and that the land of private citizens cannot be occupied by the government or any subordinate functionary clothed with legislative authority, under the pretense or the claim of improving it for the benefit of the occupant or his neighbors, requires no argument to demonstrate. It is by no means easy, however, to define the precise boundaries which limit the right to appropriate private property for public use ; or, in other words, to determine when the use shall be deemed public, and when not. It is insisted by the counsel for the plaintiffs that the purposes for which the property is taken in this case are not public, because the benefit is limited to, - and the expense assessed upon, a few individuals. But how are we to determine the number to whom the benefit will be confined? 
In the case of draining an extensive swamp, we can readily conceive that the public health may be favorably affedted, throughout a wide region, within and bordering upon the district where the work is carried on, and it surely is for the public benefit that a large tract of land should be reclaimed from the condition of a useless morass, and added to the agricultural resources of the state. But the question returns upon us, who is to judge of the degree of necessity which exists, and which alone will warrant the action of the legislative authority in determining that private property may * 169be taken for public uses? It is now well settled, if there ever has been any well founded doubt upon the proposition, that the right of “ eminent domain ” remains in the government, or in the aggregate body of the people in their sovereign capacity, and they have the right to resume the possession in the manner directed by the organic and the statute laws of the state, whenever the public interest requires it. The answer to the question I have proposed, is perhaps no where better given than by the late chancéllor of this state in the leading case of Beekman v. The Saratoga & Schenectady Rail Road Co. ( 3 Paige, 73. ) “ If the public interest can in any way be promoted by the taking of private property, it must rest in the wisdom of the legislature to determine whether the benefit to the public will be of sufficient importance to render it expedient for them to exercise the right of eminent domain, and to authorize an interference with the private rights of individuals for that purpose. ” He adds, “ upon this principle, not only the agents of government, but also individuals and corporate bodies, have been authorized to take private property for the purpose of making public highways, turnpike roads and canals, of erecting and constructing wharves and basins, of establishing ferries, of draining sioamps and marshes, and of bringing water to cities and villages. In all such cases the object of the legislative '' grant of power is the public benefit derived from the contemplated improvement. ” The use and benefit is not required to be universal, nor, in the largest sense, even general. If it is confined to a specific district, it may still be public. If some parties are more benefited than others, this forms no objection to the use, if the public interest and convenience are thereby subserved. Isolated and individual action will rarely secure the public and general result which the legislative power is invoked to accomplish ; and, in view of all the facts in this case, it is to be assumed that the legislature adjudged that the public necessity or utility justified the exercise of the right of resumption, and that the exigency existed which authorized the act in question. I do not say that a case may not exist of such palpable and gross invasion of private rights, unjustified by any semblance of pub - * 170lie necessity, that it would he the duty of the courts to interfere for the protection of such rights, by pronouncing the act a violation of the salutary principle which was designed to hold the legislative authority in check. But the case must be very clear to warrant this interference. On this part of the case, it is pertinent also to remark, that for the last fifty years, at least, the legislature has exercised the power in question here, by passing laws from time to time, authorizing, in various forms, the draining of swamps and marshes, and the reclaiming of submerged lands. 
More than twenty such acts will be found in the session laws of the state, commencing as early as 1804, and continuing at various intervals down to the very last session of the legislature, when the act in question was passed. This course of legislation is by no means conclusive when a constitutional question arises, which may never have been agitated in the courts, - under any of those acts. And we have been admonished by more than one decision that no length of time, in which a course of legislation has been continued, will protect any law from the condemnation of the judicial tribunals, when its conflict with the constitution is brought distinctly to the test. ( See opinion of Bronson, J. in Taylor v. Porter, 4 Hill, 140. ) While, therefore, it is not affirmed that. these acts may be appealed to as decisive of the power of the legislature to pass them, and that they are not within the constitutional objection we have been considering, they nevertheless do lend some strength to the argument that a power so long exercised, in such diversified forms and various localities, may be deemed settled, as applied to the subject we are now considering. Looking then at the principle which lies at the foundation of the right of the government to take private property for public use by an appropriate act of legislation, and the end which in this case may be fairly deemed the object and intent of the act, I shall - find no difficulty in maintaining it as the lawful exercise of the right of eminent domain, and holding that the taking of the lands of these plaintiffs, so far as it was necessary to enter upon and appropriate them for the purpose intended in this case, was and is a lawful taking of the same for a public use. • * 171II. But there is an important condition connected with the exercise of this power on the part of the government to take private property for the public use ; and that is, the necessity of providing a just compensation to the parties whose property shall be thus appropriated. This condition is fundamental and imperative, and can only be satisfied by making such a provision as shall be in truth “ just, ” or, in other words, adequate and compensatory. “ The principle, ” says Oh. J. Savage, ( Matter of Canal street, 11 Wend. 154, ) “ that private property shall not be taken for public use without just compensation is found in the constitution and laws of this state, and has its foundation in those elementary principles of equity and justice which lie at the root of the social compact. ” And this provision must be made cotemporaneously with, and as a part of, the act which authorizes the appropriation : For, in the language of Oh. Walworth, ( 18 Wend. 17, ) “ Before the legislature can authorize the agents of the state and others to enter upon and occupy, or destroy or materially injure, the private property of an individual, except in case of actual necessity " which will not admit of delay, an adequate and certain remedy must be provided, whereby the owner of such property may compel the payment of his damages or compensation, and he is not bound to trust to the justice of the government to make provision for such compensation by future legislation. ” And Kent, ( 2 Com. 
389, ) recognizes the same doctrine when he says, “ a provision for compensation is a necessary attendant on the due and constitutional exercise of the power given to deprive an individual of his property without his consent, and the principle is founded in natural equity, and is laid down by jurists, as an acknowledged principle of universal law. ” Bearing these principles in mind, and that by the term “ just compensation, ” as used in the constitution, is to be understood “ a fair equivalent in money — a quid pro quo, a recompense in value for the property taken, ” ( Per Mason, senator, 18 Wend. 35 ; ) and remembering also that when private " property is taken for public use by right of eminent domain, it is taken not as the owner ’ s share of contribution to a public burthen, but as so much * 172beyond bis share — let us see whether the act of the legislature, under which the proceedings of the defendants in this case have been taken, fulfills the constitutional requirement on that subject. By the 3d section of the act of April 17th, ( Session Laws of 1854, p. 1000, ) it is made the duty of the commissioners to assess the costs and expenses of the survey and the cutting of the ditches, and to apportion the amount among the several owners of lands to be drained, according to the number of acres respectively owned by each. This provision, it will be seen, devolves the whole expenses upon the parties owning the lands to be drained ; and that not in the ratio of relative benefit, but simply upon a property basis, and by an equal assessment upon every acre throughout the swamp. The rule is highly objectionable in respect to the mode of providing for the expenses, but is probably within the scope of the legislative discretion as one form of the exercise of the taxing power. These burthens never can be very equally adjusted, and there is no glaring injustice in requiring those persons to pay the expenses, who are supposed to receive an equivalent in the enhanced value of their own adjacent property. On examining the act further, to ascertain what provision has been made for the damages or compensation to be made to the owner whose lands are entered upon and taken, we find the 11th section declares, that for any damages done to the owner or owners of such lands, ( fee., the commissioners shall make just compensation ; and after providing for their appraisal in the proper mode, it is declared that such damages, and the costs of assessment and the per diem > of the commissioners, shall be duly certified and “ assessed and collected as part of the expenses of the drainage authorized by this act. ” The effect of the provision is to make the damages or compensation to be collected and payable precisely as the expenses are, to wit, by assessing the same upon the owners of the land, according to the number of acres owned by each. But is this the “ just compensation ” contemplated and required by the constitution? Most obviously, it seems to me, it is not. The taking of land necessary for the work, and the dispossession of the owner ’ s right and title thereto, is only to be vindicated on the ground '' * 173that it is required for a public use. If the improvement is required for the public benefit, upon what principle can the public benefited by the appropriation, be exempted from their proper contribution to the just indemnification of the parties whose property has been taken? 
The land appropriated is not the owner ’ s share of a contribution to a public burthen, but is so much above and beyond his share. He should be compensated, therefore, and the compensation should be made in good part, if not entirely, by those who are benefited by the work accomplished, either in the increased salubrity of the surrounding region, or the enhanced value of the lands which lie in the immediate neighborhood. But by the operation of this section, the owner not only loses his land, but is compelled to pa. y a portion of the very damages he has sustained by such loss and the other consequential injuries he may have suffered thereby. The money which is supposed to satisfy the damages suffered by the owner may, in one sense, be said to find its way into one of the pockets of the proprietor ; but to accomplish that trick of legal legerdemain, it must first be taken out of the other. Is this the “ just compensation ” the constitution contemplates? Does it practically do any more than “ Keep the word of promise to the ear, To break it to the hope. ” Besides, the burthen will of necessity be very unequally apportioned among those who are doomed to bear it. It is incredible that every owner of land in the swamp will suffer equal injury and receive equal benefit from the work in question ; and the testimony in this case shows that such is not the fact. A. is the owner of 20 acres, which is a mere morass, having no available springs upon it, and no growth of timber which the progress of the work uproots and destroys. B., on an adjoining lot, has. both springs indispensable for the uses to which he is applying his already partially reclaimed land and a growth of young timber, very valuable for farming purposes. And yet, under the law as it stands, B. pays precisely at the same rate, as a compensation towards the damages he has suffered, that A. does, who has not only suffered no injury, but has been greatly benefited by * 174the appropriation of the land and the execution of the work. This clearly is no just compensation, but a most inequitable distribution of the burthens, which ought to be in some proximate proportion to the benefits. It is urged by the counsel of the defendants that the act in question follows the precedents of prior legislation on the same subject, and is formed on the model of various acts which have authorized similar works. I have looked through most of the acts on this subject in our session laws for many years, and it is true that in " a great majority of cases no provision whatever has been mad § for ascertaining or paying the compensation required to be made. These laws have been probably acquiesced in by the parties who were interested in or affected by them, and no question has been made in the courts, as far as I am aware, respecting their constitutional validity. If there had been, I am unable to see how they could have escaped judicial condemnation. But this has not been the invariable course of legislation on this subject ; for on examining the act of April, 1816, for draining the great marsh on the Caneseraga creek, I find an express provision, that in case any person shall suffer injury or damage by occasion of the canal and drainage of the land, his damages shall be ascertained by the commissioners, and assessed on the proprietor of such lands “ as would in any wise be benefited or made more valuable, by reason of the canal ” to be cut for the purpose of draining the said swamp. 
And the same provision was made in reference to the expenses, which were to be assessed in like manner, “ having reference to the benefit to be received by each of the proprietors. ” So also in the act of April, 1825, for draining the Cayuga marshes, it was made the duty of the commissioners, when the work should be completed, to prepare an assessment roll and valuation of the land reclaimed, and all other lands which in their opinion shall have been increased in value by the lowering of the waters of the marsh, and assess a tax to pay for the work, “ in an equal and just measure according to the valuation in the assessment roll, ” adequate to meet the expenses of the work. And a substantially similar provision is contained in the act of * 175February, 1822, for lowering Onondaga Lake, and draining the marsh lands in the town of Salina. [ Oneida Special Term, December 4, 1854. Bacon, Justice. ] These acts contain the proper provisions, and are, it seems to me, founded on the true principle which ought to govern legislation on the subject of appropriating private property for public uses. Nothing could have been easier than to have inserted in the act we have been considering, a section containing a provision similar to the one found in these acts, to which I have referred, and thus have secured all the benefits which are expected to, and doubtless. will, flow from a judicious discharge of the duties devolved upon these defendants, while it preserved all the constitutional guaranties which have been thrown around the rights of the private citizen. Future legislation may possibly ’ -, even now, remedy this omission, giving validity to what has already been done, but providing for that just indemnity and compensation to which it shall be found the parties are ultimately entitled. But whether this be so or not, the duty of the courts in a case where their interposition is invoked to stay proceedings under a law which violates a glain _ constitutional provision, is clear and imperative, and must be performed. , The plaintiffs are accordingly entitled to the relief demanded in the complaint, restraining the defendants from further proceedings under the act in question. But as the defendants have been charged with a public duty, under the apparent sanction of an act of the legislature, and have acted in entire good faith, the judgment must be without costs against them.' sentences: - What legal principles govern the interpretation of insurance policy conditions for claims and notice requirements? - What are the requirements for obtaining a patent for an invention? - What are the legal principles for determining public use and just compensation under eminent domain? - source_sentence: Order affirmed, with ten dollars * 928costs and disbursements. All concurred, except Kruse, J., who dissented upon the ground that the order for examination appears upon its face to have been made under article 1 of title 3 of chapter 9 of the Code of Civil Procedure. Such an order can only be made by a judge and not by the court. If the. order was incorrectly entered it should have been resettled before the judge who presided at the court that made it. sentences: - When can a court issue a writ of prohibition to stop legal proceedings in a lower court? - What are the tax implications of a property sale in the United States? - What happens if a court order is improperly entered under civil procedure laws? - source_sentence: Loring, J. The defendant operates a private hospital for gain. 
The plaintiff went there to undergo an operation. She testified that " her physician made the arrangements for [ Tier ] entering into the hospital. . . . That she paid to the hospital $ 15 a week for attendance and $ 10 for the use of the operating room. ” The operation was performed by a surgeon not connected with the defendant hospital. The plaintiff was etherized by her family physician and he was not connected with the defendant. In addition to the surgeon and the family physician two of the defendant ’ s nurses were present at the operation. When the plaintiff was on the operating table before she went under ether she had two rings on her hands. After the operation and while the plaintiff was still under the effects of ether she was carried from the operating room to her own room in the hospital by “ one of the doctors assisted by the nurses. ” When the plaintiff came out of the ether she noticed that the more valuable of the two rings ( a ring which “ would not come off without assistance ” ) was missing. At the trial the plaintiff put the surgeon and the family physician on the witness stand. Each of them testified that he did not take the ring. The defendant put on the stand the superintendent of the hospital, one of the two operating nurses and the plaintiff ’ s day nurse. Each of them testified that she did not take the ring. The operating nurse who was put upon the witness stand testified that the other operating nurse was in California “ the last time she heard from ” her. The plaintiff made many requests for rulings and now insists upon the first, fifth, eleventh and twelfth set forth above. These were refused and an exception taken. The judge instructed the jury that to recover the plaintiff must prove that she was in the exercise of due care and that the defendant was negligent. An exception was taken to this ruling. The case is here on these exceptions. * 136On the evidence the jury were warranted in finding that the ring was forcibly removed from the plaintiff ’ s hand by the operating nurse who when last heard from was in California. If the absent nurse did steal the ring it is plain that the defendant is not liable on the ground that in stealing the ring the nurse was acting within the scope of her employment as a servant of the defendant. The first request for ruling therefore was properly refused. If the plaintiff had stood in the relation of a stranger to the defendant there would have been no error in the. trial. But the plaintiff did not stand to the defendant in the relation of a stranger. It is apparent from, the bill of exceptions that the case was not tried on the footing that the rights of the plaintiff in this action depended upon the contract made by her with the defendant. For this reason the terms of this contract do not appear as fully as they otherwise would have done. But from what does appear in the bill of exceptions the presiding judge was wrong in telling the jury that the defendant ’ s liability depended upon the plaintiff proving that it was negligent. . Under the contract entered into by the defendant corporation it was its duty not only ( 1 ) to give the plaintiff a room in the hospital before and after the operation and ( 2 ) to give her surgeon and family physician the use of the operating room for the operatian, but also ( 3 ) to give to the plaintiff the services of such nurses as were necessary for her care before, after and during the operatian. 
It expressly appeared at the trial that “ she [ the plaintiff ] paid to the hospital $ 15 a week for attendance. ” The services of the nurses which under the contract the defendant was bound to furnish the plaintiff included the services of nurses while she was unconscious from the effects of the ether, a condition which was a necessary part of the operation. And the question we have to decide is whether there was a violation of duty on the part of the defendant under this contract if the operating nurse in question stole the ring by forcibly pulling it off the plaintiff ’ s finger while she was under the effects of ether, or whether on the facts appearing at the trial the jury could have so found. We are of opinion that the jury could have so found. If for example a stranger had burst into the operating room, attacked the plaintiff and done her bodily harm or had attacked * 137the plaintiff while the nurses were carrying her from the operating room to her own room and the defendant ’ s nurses had stood by and done nothing to protect the plaintiff from those attacks, it is plain in our opinion that there would have been a violation of the duty owed by the defendant under its contract with the plaintiff. It is equally plain in our opinion that the duty owed by the defendant under its contract with the plaintiff extended to the care of the rings on her fingers while she was unconscious from the effects of ether as well as to the security of her person. And finally it is equally plain in our opinion that there is as much a violation of the duty owed by the defendant under the contract where the attack upon the person or larceny of the ring is committed by one of the defendant ’ s own nurses ( whose duty it was to protect the plaintiff ) as well as in the case where the attack is made by a stranger and the nurses do not undertake to protect her from the attack. In its legal aspects the case is governed by the decision in Bryant v. Rich, 106 Mass. 180. In that case a dispute arose between a passenger on one of the defendant ’ s steamers and one of the defendant ’ s waiters as to whether the passenger had paid for his supper. The plaintiff, a cousin of the passenger in question, made a suggestion to which no exception could have been taken. Whereupon not only the waiter in question but the head steward and the other waiters knocked down the plaintiff and beat him. It was for this assault and battery that the action in Bryant v. Rich was brought. The presiding judge ruled ( in accordance with a request made by the defendant ) that “ there is no evidence that the steward and waiters, in assaulting the plaintiff, were acting within the scope of any authority, or in the discharge of any duty, imposed upon them by the defendants. ” But in spite of this he instructed the jury that the plaintiff was entitled to recover. This ruling was sustained on the ground that as matter of contract the plaintiff as a passenger had the right to receive proper treatment from the defendants and their servants and all of them. This decision has been followed in other cases - of carriers of passengers. Hayne v. Union Street Railway, 189 Mass. 551. Jackson v. Old Colony Street Railway, 206 Mass. 477. Gentile v. Boston Elevated Railway, 217 Mass. 113. In Levins v. New York, New Haven, & Hartford Railroad, 183 Mass. 175, it was held that a case was * 138not made out under this rule where a purse had been accidentally - left on the window sill of the wash room of a car of the defendant company. In Fairbanks v. 
Boston Storage Warehouse Co. 189 Mass. 419, it was held that it did not apply where an assault was made by an attendant who under the rules of the defendant company accompanied the plaintiff when he went to examine goods stored by him in the warehouse of the defendant. The reason why the rule of Bryant v. Rich did not apply in the case of Fairbanks v. Boston Storage Warehouse Co. was because of the fact that the employee who made the assault was in attendance upon the plaintiff at the time in question for the plaintiff ’ s own purposes. He was not a servant of the defendant to whose services the plaintiff was entitled under his contract with the defendant. The decision in Bryant v. Rich does not depend upon the fact that the defendants in that case were common carriers. The decision would have been the same had the assault and battery occurred on an excursion steamer in place of upon a steamer operated by a common carrier. And the decision would have been the same if the steward and waiters had stolen rings from Bryant ’ s fingers in place of knocking him down as they did. The doctrine of Bryant v. Rich applies whenever there is a contract between the plaintiff and defendant by force of which the defendant is to furnish for the plaintiff ’ s comfort the services of its, the defendant ’ s, employees. Where the injury to the plaintiff is caused by an act of the defendant ’ s servants done in the course of their employment an action may be brought based on negligence of the defendant ’ s servants for which the defendant is liable because the act took place in the course of his servants ’ employment, or an action may be brought in that case based on violation of the duty owed by the defendant to the plaintiff under the contract between the defendant and the plaintiff. But where ( as was the case in Bryant v. Rich and in the case at bar ) the injury done the plaintiff is caused by an act of the defendant ’ s servants outside of the servants ’ duty as employees of the defendant but by an act of the defendant ’ s servants which while not in the course of the servants ’ employment is none the less a violation of the duty owed by the defendant under the defendant ’ s contract with the plaintiff, the only action that can be brought is an action founded upon the duty arising out of the contract. * 139The second count sufficiently sets forth a liability on the part of the defendant for violation of its duty under its contract with the plaintiff. It was held in Bryant v. Rich that “ for a violation of such a contract either by force or negligence, the plaintiff may bring an action of tort, or an action of contract. ” What has been said leaves open the defence which arises out of the testimony that the plaintiff when received into the hospital was asked to put into the custody of the defendant corporation all her “ valuables. ” The defendant ’ s agent who received the plaintiff when she came to. the hospital testified that that request was made to her at that time. The plaintiff on the other hand testified that she was asked to put her money into the custody of the hospital but that she was not asked to put anything else into its custody. If the defendant ’ s evidence is believed, a defence is made out. On the other hand if the plaintiff ’ s evidence on this matter is believed, her rights depend upon the rule of Bryant v. Rich, ubi supra. Exceptions sustained. sentences: - What are the tax implications of operating a private hospital for profit? 
- What legal principles determine a hospital's liability for the actions of its employees under a contract with a patient? - What are the legal implications of improperly imposed sublet surcharges in cooperative housing disputes? - source_sentence: Welsh, J. This is an action alleging negligence in the operation of a motor vehicle. The case was tried before a jury. A verdict was returned indicating that the defendant was not negligent The issue on appeal is whether the judge erred in failing to instruct the jury in accordance with G. L. c. 89, § 8, ( the general “ right of way ” at intersections ) as well as G. L. c. 89, § 9 ( the duty of a motorist at an intersection governed by a stop sign ). We determine there was no error. The following evidence was adduced at trial. On January 9, 1996, the plaintiff was operating a motor vehicle on Revere Street a public way in Quincy. She testified that she came to a complete stop at a “ stop ” sign at the intersection of Revere Street and Mechanic Street also a public way. A large mound of snow obstructed her view and she was unable to see the intersection. She proceeded out into the intersection and stopped again about half way into the intersection. The passable roadway was narrowed considerably due to the snow banks on the sides of the road. She allowed a white car to pass her and then started up again. She testified that she saw the car operated by the defendant approaching at a speed of 45 miles per hour ; nevertheless she proceeded through the intersection, making a left turn in the path of the oncoming vehicle. The defendant ’ s vehicle struck the left side of the plaintiffs vehicle, with left hand side damage to the defendant ' s vehicle. The defendant testified that the plaintiff did not stop. The jury determined that the defendant was not negligent The court gave comprehensive instructions on the elements of negligence and the duty of care. The court specifically instructed the jury as to the issue of violation of a statute as evidence of negligence, taking pains to explain that the violation, if found, must be a contributing factor to the damage sustained by the plaintiff. See Minnehan v. Hiland, 278 Mass. 518, 523 ( 1932 ). He specifically charged as to the duty to stop at a stop sign as provided by G. L. c. 89, § 9. 2 The plaintiff ’ s quarrel with the judge is that he failed specifically to instruct as she requested regarding G. L. c. 89, § 8, the general duty of care applicable when two motorists arrive at an intersection at approximately the same time. There was no error. G. L. c. 89, § 8 expressly provides that its provisions do not * 138apply when an operator is otherwise directed by a traffic regulatory sign erected and maintained in accordance with the provision of Sec. 2 of Ch. 85 ( which would include “ stop ” signs ). See Canane v. Dandini, 355 Mass. 72, 75 ( 1968 ). G. L. c. 89, § 9 is the statute that is primarily applicable to intersections governed by stop signs. As stated in Canane, one directed to stop by a stop sign may not have the benefit of the general rule if the rule grants him the right of way, until he has complied with the order to stop. After stopping, the operator becomes subject to the general rule and may proceed and thereafter exercise the right of way in accordance with that rule. Id. at 75. However, the operator must proceed into the intersection with due care. Even if the operator has the right of way under c. 89, § 8, that right is subject to the requirement of using due care. 
Possession of the right of way is only one factor to be considered in deciding whether the operator has fulfilled his duty of due care. Id. at 76. Accordingly, an operator who has stopped at a “ stop ” sign may still be found to be negligent if he proceeds into the intersection without using due care. The duty to exercise due care requires an operator who has halted at a stop sign to behave with reasonable caution before entering the intersection. Even an operator who has stopped at a stop sign and has a “ right of way ” under § 8 may be found to be negligent if he proceeds into the intersection before he can do so with reasonable prudence and with suitable regard for his safety and that of others. Freyermuth v. Lutfy, 376 Mass., 612, 616, N. 3. ( 1978 ). Again, the “ right of way ^ rule in § 8 is not absolute, but is subject to the condition of due care as to its exercise. With these principles in mind, we turn to the judge ’ s charge. At the outset, we observe that it is not required that the judge charge the jury in the precise formulation proposed [ see Poole v. Boston & Main Ry., 216 Mass. 12, 15 ( 1913 ) ] so long as the judge fairly and adequately covers the point in the charge. See Comeau v. Beck, 319 Mass. 17, 10 ( 1946 ) ; Squires v. Fraska, 301 Mass. 474, 476 ( 1938 ). Stated somewhat differently, the denial of requested instruction does not constitute error where the requested instructions were covered substantially in the charge. Pearlin v. Farrell, 356 Mass. 741 ( 1970 ). The judge gave detailed and comprehensive instructions on the concept of negligence in the context of operating of motor vehicles. He explained the duty of a motorist with regard to intersections controlled by stop signs. This explanation included the duty to yield to vehicles in or in close proximity to the intersection. While the instruction did not follow precisely the formulation suggested in the Canane and Freyermuth cases, the judge ’ s instruction properly stressed the duty of due care when proceeding into the intersection governed by the stop sign after having stopped. Appeal dismissed. So ordered. “ Another rule of the road is that every driver approaching a stop sign, shall stop at a clearly marked stop line, and if there is not a stop line, then [ at ] a point nearest the intersecting roadway before entering it After having stopped, the driver shall yield the right of way to every vehicle in the intersection or approaching in [ the ] other roadway so closely as to constitute an immediate hazard during the time when the driver is moving across or within the intersection. ” sentences: - How is rent abatement calculated in cases involving a breach of the warranty of habitability in Section 8 housing? - What are the legal requirements for establishing a valid contract in business law? - What is the legal duty of care for drivers at intersections with stop signs? model-index: - name: modernbert-embed-base trained on triplets results: - task: type: triplet name: Triplet dataset: name: dev type: dev metrics: - type: cosine_accuracy value: 1 name: Cosine Accuracy - type: cosine_accuracy value: 1 name: Cosine Accuracy --- # modernbert-embed-base trained on triplets This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base). 
It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Free-Law-Project/modernbert-embed-base_finetune_8192") # Run inference sentences = [ "Welsh, J. This is an action alleging negligence in the operation of a motor vehicle. The case was tried before a jury. A verdict was returned indicating that the defendant was not negligent The issue on appeal is whether the judge erred in failing to instruct the jury in accordance with G. L. c. 89, § 8, ( the general “ right of way ” at intersections ) as well as G. L. c. 89, § 9 ( the duty of a motorist at an intersection governed by a stop sign ). We determine there was no error. The following evidence was adduced at trial. On January 9, 1996, the plaintiff was operating a motor vehicle on Revere Street a public way in Quincy. She testified that she came to a complete stop at a “ stop ” sign at the intersection of Revere Street and Mechanic Street also a public way. A large mound of snow obstructed her view and she was unable to see the intersection. She proceeded out into the intersection and stopped again about half way into the intersection. The passable roadway was narrowed considerably due to the snow banks on the sides of the road. She allowed a white car to pass her and then started up again. She testified that she saw the car operated by the defendant approaching at a speed of 45 miles per hour ; nevertheless she proceeded through the intersection, making a left turn in the path of the oncoming vehicle. The defendant ’ s vehicle struck the left side of the plaintiffs vehicle, with left hand side damage to the defendant ' s vehicle. The defendant testified that the plaintiff did not stop. The jury determined that the defendant was not negligent The court gave comprehensive instructions on the elements of negligence and the duty of care. 
The court specifically instructed the jury as to the issue of violation of a statute as evidence of negligence, taking pains to explain that the violation, if found, must be a contributing factor to the damage sustained by the plaintiff. See Minnehan v. Hiland, 278 Mass. 518, 523 ( 1932 ). He specifically charged as to the duty to stop at a stop sign as provided by G. L. c. 89, § 9. 2 The plaintiff ’ s quarrel with the judge is that he failed specifically to instruct as she requested regarding G. L. c. 89, § 8, the general duty of care applicable when two motorists arrive at an intersection at approximately the same time. There was no error. G. L. c. 89, § 8 expressly provides that its provisions do not * 138apply when an operator is otherwise directed by a traffic regulatory sign erected and maintained in accordance with the provision of Sec. 2 of Ch. 85 ( which would include “ stop ” signs ). See Canane v. Dandini, 355 Mass. 72, 75 ( 1968 ). G. L. c. 89, § 9 is the statute that is primarily applicable to intersections governed by stop signs. As stated in Canane, one directed to stop by a stop sign may not have the benefit of the general rule if the rule grants him the right of way, until he has complied with the order to stop. After stopping, the operator becomes subject to the general rule and may proceed and thereafter exercise the right of way in accordance with that rule. Id. at 75. However, the operator must proceed into the intersection with due care. Even if the operator has the right of way under c. 89, § 8, that right is subject to the requirement of using due care. Possession of the right of way is only one factor to be considered in deciding whether the operator has fulfilled his duty of due care. Id. at 76. Accordingly, an operator who has stopped at a “ stop ” sign may still be found to be negligent if he proceeds into the intersection without using due care. The duty to exercise due care requires an operator who has halted at a stop sign to behave with reasonable caution before entering the intersection. Even an operator who has stopped at a stop sign and has a “ right of way ” under § 8 may be found to be negligent if he proceeds into the intersection before he can do so with reasonable prudence and with suitable regard for his safety and that of others. Freyermuth v. Lutfy, 376 Mass., 612, 616, N. 3. ( 1978 ). Again, the “ right of way ^ rule in § 8 is not absolute, but is subject to the condition of due care as to its exercise. With these principles in mind, we turn to the judge ’ s charge. At the outset, we observe that it is not required that the judge charge the jury in the precise formulation proposed [ see Poole v. Boston & Main Ry., 216 Mass. 12, 15 ( 1913 ) ] so long as the judge fairly and adequately covers the point in the charge. See Comeau v. Beck, 319 Mass. 17, 10 ( 1946 ) ; Squires v. Fraska, 301 Mass. 474, 476 ( 1938 ). Stated somewhat differently, the denial of requested instruction does not constitute error where the requested instructions were covered substantially in the charge. Pearlin v. Farrell, 356 Mass. 741 ( 1970 ). The judge gave detailed and comprehensive instructions on the concept of negligence in the context of operating of motor vehicles. He explained the duty of a motorist with regard to intersections controlled by stop signs. This explanation included the duty to yield to vehicles in or in close proximity to the intersection. 
While the instruction did not follow precisely the formulation suggested in the Canane and Freyermuth cases, the judge ’ s instruction properly stressed the duty of due care when proceeding into the intersection governed by the stop sign after having stopped. Appeal dismissed. So ordered. “ Another rule of the road is that every driver approaching a stop sign, shall stop at a clearly marked stop line, and if there is not a stop line, then [ at ] a point nearest the intersecting roadway before entering it After having stopped, the driver shall yield the right of way to every vehicle in the intersection or approaching in [ the ] other roadway so closely as to constitute an immediate hazard during the time when the driver is moving across or within the intersection. ”", 'What is the legal duty of care for drivers at intersections with stop signs?', 'What are the legal requirements for establishing a valid contract in business law?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `dev` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:--------------------|:--------| | **cosine_accuracy** | **1.0** | #### Triplet * Dataset: `dev` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:--------------------|:--------| | **cosine_accuracy** | **1.0** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Free-Law-Project/opinions-synthetic-query-8192 * Size: 351 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 351 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 62 tokens</li><li>mean: 2810.15 tokens</li><li>max: 7455 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 18.93 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 14.86 tokens</li><li>max: 21 tokens</li></ul> | * Samples: | anchor | positive | negative | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------| | <code>DISTRICT COURT OF APPEAL OF THE STATE OF FLORIDA FOURTH DISTRICT EURICE McGILL, Appellant, v. STATE OF FLORIDA, Appellee. No. 4D17 - 1492 [ August 31, 2017 ] Appeal of order denying rule 3. 850 motion from the Circuit Court for the Seventeenth Judicial Circuit, Broward County ; Paul L. Backman, Judge ; L. T. Case No. 10 - 12523CF10A. Eurice McGill, Lake City, pro se. No appearance required for appellee. PER CURIAM. Affirmed. WARNER, DAMOORGIAN and KUNTZ, JJ., concur. * * * Not final until disposition of timely filed motion for rehearing.</code> | <code>What are the grounds for denying a Rule 3.850 motion in Florida courts?</code> | <code>What are the qualifications to file for an eviction in Florida?</code> | | <code>Twersky v Incorporated Vil. of Great Neck ( 2015 NY Slip Op 02755 ) Twersky v Incorporated Vil. of Great Neck 2015 NY Slip Op 02755 Decided on April 1, 2015 Appellate Division, Second Department Published by New York State Law Reporting Bureau pursuant to Judiciary Law § 431. This opinion is uncorrected and subject to revision before publication in the Official Reports. Decided on April 1, 2015 SUPREME COURT OF THE STATE OF NEW YORK Appellate Division, Second Judicial Department RANDALL T. ENG, P. J. LEONARD B. AUSTIN JEFFREY A. COHEN BETSY BARROS, JJ. 2014 - 07552 ( Index No. 
9576 / 12 ) [ * 1 ] Sharon Twersky, respondent, v Incorporated Village of Great Neck, et al., defendants, FHM Mortgage Corp., et al., appellants. Cascone & Kluepfel, LLP, Garden City, N. Y. ( Howard B. Altman of counsel ), for appellants. Isaacson, Schiowitz & Korson, LLP, Rockville Centre, N. Y. ( Jeremy Schiowitz of counsel ), for respondent. DECISION & ORDER In an action to recover damages for personal injurie...</code> | <code>What legal principles determine a property owner's duty to maintain safe conditions for pedestrians?</code> | <code>What are the tax implications of selling a property in New York State?</code> | | <code>951 A. 2d 180 ( 2008 ) Philip S. HORNER v. GOVERNOR, State of New Hampshire and another. No. 2007 - 668. Supreme Court of New Hampshire. Argued March 27, 2008. Opinion Issued : June 19, 2008. * 181 Philip S. Horner, pro se, and Richard E. Samdperil, of Exeter ( Mr. Horner on the brief, and Mr. Samdperil orally ), for the plaintiff. Kelly A. Ayotte, attorney general ( Karen A. Schlitzer, assistant attorney general, on the memorandum of law and orally ), for the defendants. BRODERICK, C. J. The plaintiff, Philip S. Horner, appeals an order of the Superior Court ( Smukler, * 182 J. ) denying his petition for a writ of prohibition to enjoin the State from enforcing RSA 651 - B : 11 ( 2007 & Supp. 2007 ), which mandates the collection of a sex offender registration fee. We affirm. The plaintiff was convicted in 2000 of five counts of felonious sexual assault, see RSA 632 - A : 3 ( 2007 ). Every sex offender and offender against children is required to register with the New Hampshire Divisio...</code> | <code>What determines whether a charge is classified as a tax or a fee under New Hampshire law?</code> | <code>What are the tax implications of forming a non-profit organization in the United States?</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### Free-Law-Project/opinions-synthetic-query-8192 * Size: 95 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 95 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 73 tokens</li><li>mean: 1723.31 tokens</li><li>max: 7494 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 18.89 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 14.46 tokens</li><li>max: 20 tokens</li></ul> | * Samples: | anchor | positive | negative | 
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------| | <code>Mr. Justice Mercur delivered the opinion of the court, November 20th 1882. Both parties claim title to this land under sheriff ’ s sale as the property of James Strouss. The defendant purchased at a sale made in December 1815, the plaintiff at one made in March 1880. The plaintiff seeks to impeach the validity of the first sale * 411on the ground that it was made in fraud of the creditors of Strouss. The law presumes that a public judicial sale is made in good faith. This presumption stands, unless overthrown by clear and satisfactory evidence of fraud or unfair means. The contention was one of fact. Much evidence Avas given bearing on the question, and some of it conflicting. The learned judge submitted the case to the jury in a clear and correct charge. He instructed them that if the sheriff ’ s sale was made with the intention of hindering, delaying or defeating creditors, and the purchaser had knowledge of such, it was null and void, although the full value of the property may have...</code> | <code>What are the legal principles governing fraud and sale validity in sheriff's sales?</code> | <code>What are the legal implications of intellectual property infringement?</code> | | <code>217 N. J. Super. 541 ( 1987 ) 526 A. 2d 290 ALAN C. STAVER, PLAINTIFF, v. MARGARET STAVER, DEFENDANT. Superior Court of New Jersey, Chancery Division Bergen County, Family Part. March 11, 1987. * 543 Donald L. Garber for plaintiff ( Donald L. Garber, attorney ; Michael I. Lubin on the brief ). John Fiorello for defendant ( Feldman, Feldman, Hoffman & Fiorello, attorneys ). SIMON, MARGUERITE T., J. S. C. Plaintiff husband brings this motion seeking to terminate his obligation to pay alimony to defendant pursuant to a judgment of divorce entered September 6, 1974. Defendant wife brings a cross - motion for enforcement of the judgment. At the time of the entry of the final judgment, plaintiff was employed as an ordained minister earning approximately $ 12, 000 a year. The parties entered into a consensual agreement which was incorporated into the judgment. 
Two pertinent stipulations of the agreement are as follows : ( 1 ) " Said alimony of $ 500 per month shall continue in effect regardle...</code> | <code>Can pension benefits accrued after a divorce be considered as income for modifying alimony payments?</code> | <code>What are the tax implications of forming a limited liability company (LLC)?</code> | | <code>Howard, J. : By the ' will of Byron S. Briggs, which was offered for probate in the Surrogate ’ s Court of Madison county, Harriet 0. Briggs, his wife, was appointed executrix. After the surrogate had overruled certain objections to the probate of the will and announced his conclusion that the will should be admitted to probate, written objections were filed to the issuance of letters testamentary to the widow, on the ground that she had deliberately murdered the testator for the purpose of thwarting any attempt on his part to make another will. The objections were filed by the son of the testator ; and his attitude of opposition to the widow was approved by a granddaughter of the testator. These two persons were descendants of the testator by a former wife. They were legatees under the will and had a statutory right to make objections. ( See Code Civ. Proc. § 2636. ) They stood ready with the witnesses in court and offered to make proof of the serious charges which they had preferred ...</code> | <code>Can someone accused of murdering a testator be appointed as an executor of the will?</code> | <code>What are the tax implications for inheriting property in the United States?</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 2 - `per_device_eval_batch_size`: 2 - `learning_rate`: 2e-05 - `num_train_epochs`: 2 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 2 - `per_device_eval_batch_size`: 2 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - 
`disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Validation Loss | dev_cosine_accuracy | |:------:|:----:|:---------------:|:-------------------:| | -1 | -1 | - | 0.9895 | | 0.5682 | 100 | 0.0288 | 0.9895 | | 1.1364 | 200 | 0.0317 | 1.0 | | 1.7045 | 300 | 0.0166 | 1.0 | | -1 | -1 | - | 1.0 | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 3.4.1 - Transformers: 4.49.0 - PyTorch: 2.6.0+cu124 - Accelerate: 1.4.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for 
the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
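### Reproducing the Fine-Tuning Run (Sketch)

The hyperparameters, loss, and evaluator described above map directly onto the `SentenceTransformerTrainer` API. The sketch below is illustrative rather than the exact training script: the dataset split names, the `eval_steps` value (inferred from the step counts in the training logs), and the output directory are assumptions, while the remaining arguments mirror the non-default hyperparameters listed in this card.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.evaluation import TripletEvaluator
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Base model with an 8192-token context window.
model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Triplet data with "anchor", "positive", and "negative" columns.
# The split names below are assumptions; adjust them to the actual dataset layout.
dataset = load_dataset("Free-Law-Project/opinions-synthetic-query-8192")
train_dataset = dataset["train"]
eval_dataset = dataset["test"]

# In-batch negatives loss with the parameters reported above (cosine similarity is the default).
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-base_finetune_8192",  # assumed output directory
    num_train_epochs=2,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    eval_steps=100,  # assumed; consistent with the 100/200/300 steps in the training logs
    # Keep duplicate texts out of a batch so they cannot act as false negatives.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

# Triplet evaluator that produces the dev_cosine_accuracy metric reported above.
dev_evaluator = TripletEvaluator(
    anchors=eval_dataset["anchor"],
    positives=eval_dataset["positive"],
    negatives=eval_dataset["negative"],
    name="dev",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
    evaluator=dev_evaluator,
)
trainer.train()
model.save_pretrained("modernbert-embed-base_finetune_8192/final")
```

Because `MultipleNegativesRankingLoss` treats every other passage in the batch as a negative, the `no_duplicates` batch sampler matters: a repeated text in the same batch would otherwise be scored as a false negative.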
[ "BEAR" ]
Non_BioNLP
{"base_model": "nomic-ai/modernbert-embed-base", "language": ["en"], "library_name": "sentence-transformers", "license": "cc0-1.0", "metrics": ["cosine_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:351", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Rose, J. Appeal from a judgment of the Supreme Court ( Malone, Jr., J. ), entered June 13, 2001 in Albany County, which partially granted petitioner ’ s application, in a proceeding pursuant to CPLR article 78, to review a determination of the Department of Health reducing a component of its Medicaid reimbursement rate. Petitioner, a residential health care facility operating in Chemung County, commenced this proceeding seeking, inter alia, annulment of respondents ’ determination adjusting its case mix index based on misclassifications revealed in an audit of patient review instrument data conducted by the Department of Health ( hereinafter Department ) and recalculating petitioner ’ s Medicaid reimbursement rate for the period beginning April 1, 1999. 1 Specifically, the Department found that petitioner had improperly classified 28 patients as receiving restorative therapy rather than maintenance therapy, reduced petitioner ’ s reimbursement rate accordingly, and directed that future patient assessments be performed by an independent auditor. Petitioner argued that the Department ’ s nurse - auditors had improperly “ second - guessed ” the physician - prescribed rehabilitative care plans for its patients by denying reimbursement even though petitioner had provided restorative therapy as prescribed. Petitioner also argued that the Department acted arbitrarily and capriciously in using a fixed general rule precluding reimbursement for restorative therapy unless it produces actual improvement ( hereinafter actual improvement standard ) that has not been properly adopted and filed as a formal regulation. Supreme Court accepted this latter argument, granted the petition and remitted the matter to respondents to review the patient classifications without re * 773course to the actual improvement standard. Respondents now appeal. 2 Respondents argue that Supreme Court ’ s ruling was improper because the Department ’ s actual improvement standard is based on a rational interpretation of an existing regulation and, thus, is not an unfiled rule. Petitioner reiterates its contentions that the denial of reimbursement for restorative therapy provided to its patients was improper both because it was based on an auditor ’ s after - the - fact medical judgment and on an unfiled rule requiring actual improvement. Since the Department ’ s auditors were not required to defer to the judgments of petitioner ’ s physicians and therapists in retrospectively reviewing what patient care qualified for Medicaid reimbursement ( see, Concourse Rehabilitation & Nursing Ctr. v DeBuono, US Dist Ct, SD NY, June 11, 1988, Conti, J., slip op at 12, appeal dismissed 179 F3d 38 ), we find no merit in petitioner ’ s first contention. Rather, as considered by Supreme Court and as presented on appeal, the central issue is whether respondents ’ actual improvement standard for the restorative therapy classification is a rational interpretation of an existing regulation or a new unfiled rule being applied in violation of the State Administrative Procedure Act. Under 10 NYCRR 86 - 2. 
30 ( i ) ( Instructions : Patient Review Instrument [ PRI ] [ 27 ] ), a restorative therapy classification is proper where “ [ t ] here is positive potential for improved functional status within a short and predictable period of time ” and the “ [ t ] herapy plan of care and progress notes * * * support that [ the ] patient has this potential / is improving. ” In its clarification sheet provided to nursing homes, the Department explains that the phrase “ has this potential / is improving ” means that the patient must demonstrate both the potential for functional improvement and the actual occurrence of such improvement in order to qualify for the restorative therapy classification. On this appeal, the Department acknowledges that it has a fixed policy of applying the quoted regulation in this manner. Contrary to Supreme Court ’ s conclusion, we find that the Department ’ s clarification sheet is interpretive, that its interpretation has a rational basis and that, therefore, the resulting actual improvement standard does not constitute an improper unfiled rule ( see, State Administrative Procedure Act * 774 § 102 [ 2 ] [ b ] [ iv ] ; see also, Matter of Dubb Enters. v New York State Liq. Auth., 187 AD2d 831, 833 ; cf, Matter of Cordero v Corbisiero, 80 NY2d 771, 772 - 773 ; Matter of Stuyvesant Polyclinic v Axelrod, 117 AD2d 99, 101 ). Generally, “ courts will defer to an agency ’ s interpretation of its own regulations if not irrational ” ( Matter of Silver Lake Nursing Home v Axelrod, 156 AD2d 789, 790 ; see, Matter of Marzec v DeBuono, 95 NY2d 262, 266 ; Matter of County of Rockland v Axelrod, 157 AD2d 960, 961 ), and the agency ’ s interpretation is not rendered irrational simply because the regulation may be susceptible to a different rational interpretation ( Matter of Jennings v New York State Off. of Mental Health, 90 NY2d 227, 239 ). Petitioner focuses on the role played by the forward slash or virgule in the phrase “ patient has this potential / is improving. ” Arguing that common usage reflects that the virgule merely means “ or, ” petitioner concludes that the Department ’ s requirements of potential improvement and actual improvement contradicts the language of the regulation. Our view of the use of the virgule in the regulation at issue here leads to a contrary conclusion. “ Virgule ” has been defined as a symbol used to denote, inter alia, “ or ” or “ and or ” ( see, Webster ’ s Third New International Dictionary 2555 [ unabridged 1986 ], cross - referencing “ diagonal, ” Webster ’ s Third New International Dictionary 622 [ unabridged 1986 ] ). Even defined in this way, the virgule allows for usage as “ and, ” resulting in no contradiction when both alternatives apply. However, “ virgule ” is more comprehensively defined as “ a short oblique stroke ( / ) between two words indicating that whichever is appropriate may be chosen to complete the sense of the text in which they occur ” ( Random House Dictionary of the English Language 2125 [ unabridged 2d ed 1993 ] ). This definition is particularly apt here because the phrase “ patient has this potential / is improving ” follows, and is parallel to, the preceding phrase “ therapy plan of care and progress notes. ” To interpret the entire regulation, rather than parse the latter phrase only, it is rational to view the virgule as indicating that the reader should use the words that most appropriately complete the sense of the whole sentence. 
As the earlier phrase has two concepts with one anticipating future progress and the other reporting actual progress, the phrase “ patient has this potential / is improving ” provides the choice between potential and actual circumstances depending upon whether a plan for a patient or a patient ’ s progress is being considered. Interpreted this way, the regulation requires a therapy plan to set forth the patient ’ s potential for improvement and the patient ’ s prog * 775ress notes to reflect actual improvement in order to qualify as restorative. Such an interpretation is also consistent with the overall regulatory scheme, for it seeks to assure that restorative therapy is utilized when it potentially will result in patient improvement while excluding reimbursement if the expected improvement is not achieved ( see, Concourse Rehabilitation & Nursing Ctr. v Whalen, 249 F3d 136, 143 - 146 ). 3 Given the parallel structure of the pertinent phrases of the regulation and the recognized use of the virgule to implement such parallelism, we find no conflict between the cited regulation and respondents ’ interpretation, and conclude that their interpretation has a rational basis. Finally, petitioner ’ s contention that the issue is not judicially reviewable because the Department, through its auditors, did not expressly rely on the actual improvement standard in reclassifying petitioner ’ s patients is belied by the petition itself, which narrowly framed the issue by asserting that the Department ’ s actual improvement standard had resulted in the reclassifications. Accordingly, it was error to grant the petition and require further assessment by the Department. Crew III, J. P., Peters, Mugglin and Lahtinen, JJ., concur. Ordered that the judgment is modified, on the law, without costs, by reversing so much thereof as partially granted the petition ; petition denied in its entirety ; and, as so modified, affirmed. . We refer the reader to Concourse Rehabilitation & Nursing Ctr. v Whalen ( 249 F3d 136 ) for an overview of the Medicaid program and Matter of Teresian House Nursing Home Co. v Chassin ( 218 AD2d 250 ) for a description of its process for auditing patient assessments. . Since the judgment issued by Supreme Court is nonimal and, thus, not appealable as of right ( see, CPLR 5701 [ b ] [ 1 ] ; [ c ] ), we exercise our authority to grant permission to appeal sua sponte given the importance of the issue presented ( see, Matter of Gane v Ambach, 135 AD2d 1013, 1013 - 1014 ). . The Health Care Financing Agency ’ s “ Carriers Manual ” provides as follows : “ Restorative Therapy. To constitute physical therapy a service must, among other things, be reasonable and necessary to the treatment of the individual ’ s illness. * * * In addition, there must be an expectation that the patient ’ s condition will improve significantly in a reasonable ( and generally predictable ) period of time. However, if at any point in the treatment of an illness, it is determined that the expectations will not materialize, the services will no longer be considered reasonable and necessary ; and they, therefore, should be excluded from coverage under § 1862 ( a ) ( 1 ) of the Social Security Act [ 42 USC § 1862 ( a ) ( 1 ) ] ” ( Carriers Manual, part 3, ch II, § 2210. 
1 [ emphasis supplied ] ).", "sentences": ["What are the legal standards for proving legal malpractice in New York?", "What are the criteria for granting a motion to dismiss in a criminal trial?", "What determines Medicaid reimbursement eligibility for restorative therapy in New York?"]}, {"source_sentence": "Bacon, J. The grounds on which the plaintiffs ask the relief to which they suppose themselves entitled are two fold. First, they allege that the proceedings of the defendants are calculated to do incalculable injury to the farms of the plaintiffs, by cutting off and drying up their springs, and destroying the growth of their young timber, and that these proceedings are conducted in bad faith and with the intent to injure the plaintiffs, and benefit the lands of other parties not contributing to the expense of the work ; and secondly, they insist that the act under which the defendants are assuming to perform the work in question is unconstitutional and void, as depriving the plaintiffs of their property, not for any public use, and without providing them a just compensation therefor. I shall spend no time upon the first branch of the plaintiffs ’ case, because there is no evidence whatever before me tending to show that the defendants are acting in bad faith ; and although there is some diversity of opinion whether the mode adopted by the defendants is the one best calculated to secure the result at which they are aiming, and whether the manner of its execution is the most judicious, yet this may be deemed at best a balanced question, on the evidence. Even if they err in judgment, a court would hardly be justified in interfering by the summary process of injunction to restrain their proceedings. Unless the defendants are violating the plain and manifiest intent and object of the statute under which they are acting, or are proceeding in bad faith, the court should not interpose its a, u * 168thority to suspend the work. In either aspect, I see no sufficient ground, as disclosed by the evidence, to entitle the plaintiff to the relief they ask under the first head of their complaint. The more important question, as it was the one most elaborately and ably argued by the counsel on both sides, respects the inquiry whether the act of April 16th, 1854, under which the defendants are carrying on the work of draining, the Rome swamp, is not a violation of the constitution, and therefore void. The plaintiffs ’ counsel insists that the act is a violation of the constitutional inhibition against taking private property, because, ( 1. ) It is not taken for a public use ; and ( 2. ) Because no just compensation is provided for the parties whose property is taken. I. That the property of A. cannot be taken and appropriated to the use of B., however beneficial the change may bej and that the land of private citizens cannot be occupied by the government or any subordinate functionary clothed with legislative authority, under the pretense or the claim of improving it for the benefit of the occupant or his neighbors, requires no argument to demonstrate. It is by no means easy, however, to define the precise boundaries which limit the right to appropriate private property for public use ; or, in other words, to determine when the use shall be deemed public, and when not. It is insisted by the counsel for the plaintiffs that the purposes for which the property is taken in this case are not public, because the benefit is limited to, - and the expense assessed upon, a few individuals. 
But how are we to determine the number to whom the benefit will be confined? In the case of draining an extensive swamp, we can readily conceive that the public health may be favorably affedted, throughout a wide region, within and bordering upon the district where the work is carried on, and it surely is for the public benefit that a large tract of land should be reclaimed from the condition of a useless morass, and added to the agricultural resources of the state. But the question returns upon us, who is to judge of the degree of necessity which exists, and which alone will warrant the action of the legislative authority in determining that private property may * 169be taken for public uses? It is now well settled, if there ever has been any well founded doubt upon the proposition, that the right of “ eminent domain ” remains in the government, or in the aggregate body of the people in their sovereign capacity, and they have the right to resume the possession in the manner directed by the organic and the statute laws of the state, whenever the public interest requires it. The answer to the question I have proposed, is perhaps no where better given than by the late chancéllor of this state in the leading case of Beekman v. The Saratoga & Schenectady Rail Road Co. ( 3 Paige, 73. ) “ If the public interest can in any way be promoted by the taking of private property, it must rest in the wisdom of the legislature to determine whether the benefit to the public will be of sufficient importance to render it expedient for them to exercise the right of eminent domain, and to authorize an interference with the private rights of individuals for that purpose. ” He adds, “ upon this principle, not only the agents of government, but also individuals and corporate bodies, have been authorized to take private property for the purpose of making public highways, turnpike roads and canals, of erecting and constructing wharves and basins, of establishing ferries, of draining sioamps and marshes, and of bringing water to cities and villages. In all such cases the object of the legislative ' grant of power is the public benefit derived from the contemplated improvement. ” The use and benefit is not required to be universal, nor, in the largest sense, even general. If it is confined to a specific district, it may still be public. If some parties are more benefited than others, this forms no objection to the use, if the public interest and convenience are thereby subserved. Isolated and individual action will rarely secure the public and general result which the legislative power is invoked to accomplish ; and, in view of all the facts in this case, it is to be assumed that the legislature adjudged that the public necessity or utility justified the exercise of the right of resumption, and that the exigency existed which authorized the act in question. I do not say that a case may not exist of such palpable and gross invasion of private rights, unjustified by any semblance of pub - * 170lie necessity, that it would he the duty of the courts to interfere for the protection of such rights, by pronouncing the act a violation of the salutary principle which was designed to hold the legislative authority in check. But the case must be very clear to warrant this interference. 
On this part of the case, it is pertinent also to remark, that for the last fifty years, at least, the legislature has exercised the power in question here, by passing laws from time to time, authorizing, in various forms, the draining of swamps and marshes, and the reclaiming of submerged lands. More than twenty such acts will be found in the session laws of the state, commencing as early as 1804, and continuing at various intervals down to the very last session of the legislature, when the act in question was passed. This course of legislation is by no means conclusive when a constitutional question arises, which may never have been agitated in the courts, - under any of those acts. And we have been admonished by more than one decision that no length of time, in which a course of legislation has been continued, will protect any law from the condemnation of the judicial tribunals, when its conflict with the constitution is brought distinctly to the test. ( See opinion of Bronson, J. in Taylor v. Porter, 4 Hill, 140. ) While, therefore, it is not affirmed that. these acts may be appealed to as decisive of the power of the legislature to pass them, and that they are not within the constitutional objection we have been considering, they nevertheless do lend some strength to the argument that a power so long exercised, in such diversified forms and various localities, may be deemed settled, as applied to the subject we are now considering. Looking then at the principle which lies at the foundation of the right of the government to take private property for public use by an appropriate act of legislation, and the end which in this case may be fairly deemed the object and intent of the act, I shall - find no difficulty in maintaining it as the lawful exercise of the right of eminent domain, and holding that the taking of the lands of these plaintiffs, so far as it was necessary to enter upon and appropriate them for the purpose intended in this case, was and is a lawful taking of the same for a public use. • * 171II. But there is an important condition connected with the exercise of this power on the part of the government to take private property for the public use ; and that is, the necessity of providing a just compensation to the parties whose property shall be thus appropriated. This condition is fundamental and imperative, and can only be satisfied by making such a provision as shall be in truth “ just, ” or, in other words, adequate and compensatory. “ The principle, ” says Oh. J. Savage, ( Matter of Canal street, 11 Wend. 154, ) “ that private property shall not be taken for public use without just compensation is found in the constitution and laws of this state, and has its foundation in those elementary principles of equity and justice which lie at the root of the social compact. ” And this provision must be made cotemporaneously with, and as a part of, the act which authorizes the appropriation : For, in the language of Oh. Walworth, ( 18 Wend. 17, ) “ Before the legislature can authorize the agents of the state and others to enter upon and occupy, or destroy or materially injure, the private property of an individual, except in case of actual necessity \" which will not admit of delay, an adequate and certain remedy must be provided, whereby the owner of such property may compel the payment of his damages or compensation, and he is not bound to trust to the justice of the government to make provision for such compensation by future legislation. ” And Kent, ( 2 Com. 
389, ) recognizes the same doctrine when he says, “ a provision for compensation is a necessary attendant on the due and constitutional exercise of the power given to deprive an individual of his property without his consent, and the principle is founded in natural equity, and is laid down by jurists, as an acknowledged principle of universal law. ” Bearing these principles in mind, and that by the term “ just compensation, ” as used in the constitution, is to be understood “ a fair equivalent in money — a quid pro quo, a recompense in value for the property taken, ” ( Per Mason, senator, 18 Wend. 35 ; ) and remembering also that when private \" property is taken for public use by right of eminent domain, it is taken not as the owner ’ s share of contribution to a public burthen, but as so much * 172beyond bis share — let us see whether the act of the legislature, under which the proceedings of the defendants in this case have been taken, fulfills the constitutional requirement on that subject. By the 3d section of the act of April 17th, ( Session Laws of 1854, p. 1000, ) it is made the duty of the commissioners to assess the costs and expenses of the survey and the cutting of the ditches, and to apportion the amount among the several owners of lands to be drained, according to the number of acres respectively owned by each. This provision, it will be seen, devolves the whole expenses upon the parties owning the lands to be drained ; and that not in the ratio of relative benefit, but simply upon a property basis, and by an equal assessment upon every acre throughout the swamp. The rule is highly objectionable in respect to the mode of providing for the expenses, but is probably within the scope of the legislative discretion as one form of the exercise of the taxing power. These burthens never can be very equally adjusted, and there is no glaring injustice in requiring those persons to pay the expenses, who are supposed to receive an equivalent in the enhanced value of their own adjacent property. On examining the act further, to ascertain what provision has been made for the damages or compensation to be made to the owner whose lands are entered upon and taken, we find the 11th section declares, that for any damages done to the owner or owners of such lands, ( fee., the commissioners shall make just compensation ; and after providing for their appraisal in the proper mode, it is declared that such damages, and the costs of assessment and the per diem > of the commissioners, shall be duly certified and “ assessed and collected as part of the expenses of the drainage authorized by this act. ” The effect of the provision is to make the damages or compensation to be collected and payable precisely as the expenses are, to wit, by assessing the same upon the owners of the land, according to the number of acres owned by each. But is this the “ just compensation ” contemplated and required by the constitution? Most obviously, it seems to me, it is not. The taking of land necessary for the work, and the dispossession of the owner ’ s right and title thereto, is only to be vindicated on the ground ' * 173that it is required for a public use. If the improvement is required for the public benefit, upon what principle can the public benefited by the appropriation, be exempted from their proper contribution to the just indemnification of the parties whose property has been taken? 
The land appropriated is not the owner ’ s share of a contribution to a public burthen, but is so much above and beyond his share. He should be compensated, therefore, and the compensation should be made in good part, if not entirely, by those who are benefited by the work accomplished, either in the increased salubrity of the surrounding region, or the enhanced value of the lands which lie in the immediate neighborhood. But by the operation of this section, the owner not only loses his land, but is compelled to pa. y a portion of the very damages he has sustained by such loss and the other consequential injuries he may have suffered thereby. The money which is supposed to satisfy the damages suffered by the owner may, in one sense, be said to find its way into one of the pockets of the proprietor ; but to accomplish that trick of legal legerdemain, it must first be taken out of the other. Is this the “ just compensation ” the constitution contemplates? Does it practically do any more than “ Keep the word of promise to the ear, To break it to the hope. ” Besides, the burthen will of necessity be very unequally apportioned among those who are doomed to bear it. It is incredible that every owner of land in the swamp will suffer equal injury and receive equal benefit from the work in question ; and the testimony in this case shows that such is not the fact. A. is the owner of 20 acres, which is a mere morass, having no available springs upon it, and no growth of timber which the progress of the work uproots and destroys. B., on an adjoining lot, has. both springs indispensable for the uses to which he is applying his already partially reclaimed land and a growth of young timber, very valuable for farming purposes. And yet, under the law as it stands, B. pays precisely at the same rate, as a compensation towards the damages he has suffered, that A. does, who has not only suffered no injury, but has been greatly benefited by * 174the appropriation of the land and the execution of the work. This clearly is no just compensation, but a most inequitable distribution of the burthens, which ought to be in some proximate proportion to the benefits. It is urged by the counsel of the defendants that the act in question follows the precedents of prior legislation on the same subject, and is formed on the model of various acts which have authorized similar works. I have looked through most of the acts on this subject in our session laws for many years, and it is true that in \" a great majority of cases no provision whatever has been mad § for ascertaining or paying the compensation required to be made. These laws have been probably acquiesced in by the parties who were interested in or affected by them, and no question has been made in the courts, as far as I am aware, respecting their constitutional validity. If there had been, I am unable to see how they could have escaped judicial condemnation. But this has not been the invariable course of legislation on this subject ; for on examining the act of April, 1816, for draining the great marsh on the Caneseraga creek, I find an express provision, that in case any person shall suffer injury or damage by occasion of the canal and drainage of the land, his damages shall be ascertained by the commissioners, and assessed on the proprietor of such lands “ as would in any wise be benefited or made more valuable, by reason of the canal ” to be cut for the purpose of draining the said swamp. 
And the same provision was made in reference to the expenses, which were to be assessed in like manner, “ having reference to the benefit to be received by each of the proprietors. ” So also in the act of April, 1825, for draining the Cayuga marshes, it was made the duty of the commissioners, when the work should be completed, to prepare an assessment roll and valuation of the land reclaimed, and all other lands which in their opinion shall have been increased in value by the lowering of the waters of the marsh, and assess a tax to pay for the work, “ in an equal and just measure according to the valuation in the assessment roll, ” adequate to meet the expenses of the work. And a substantially similar provision is contained in the act of * 175February, 1822, for lowering Onondaga Lake, and draining the marsh lands in the town of Salina. [ Oneida Special Term, December 4, 1854. Bacon, Justice. ] These acts contain the proper provisions, and are, it seems to me, founded on the true principle which ought to govern legislation on the subject of appropriating private property for public uses. Nothing could have been easier than to have inserted in the act we have been considering, a section containing a provision similar to the one found in these acts, to which I have referred, and thus have secured all the benefits which are expected to, and doubtless. will, flow from a judicious discharge of the duties devolved upon these defendants, while it preserved all the constitutional guaranties which have been thrown around the rights of the private citizen. Future legislation may possibly ’ -, even now, remedy this omission, giving validity to what has already been done, but providing for that just indemnity and compensation to which it shall be found the parties are ultimately entitled. But whether this be so or not, the duty of the courts in a case where their interposition is invoked to stay proceedings under a law which violates a glain _ constitutional provision, is clear and imperative, and must be performed. , The plaintiffs are accordingly entitled to the relief demanded in the complaint, restraining the defendants from further proceedings under the act in question. But as the defendants have been charged with a public duty, under the apparent sanction of an act of the legislature, and have acted in entire good faith, the judgment must be without costs against them.", "sentences": ["What legal principles govern the interpretation of insurance policy conditions for claims and notice requirements?", "What are the requirements for obtaining a patent for an invention?", "What are the legal principles for determining public use and just compensation under eminent domain?"]}, {"source_sentence": "Order affirmed, with ten dollars * 928costs and disbursements. All concurred, except Kruse, J., who dissented upon the ground that the order for examination appears upon its face to have been made under article 1 of title 3 of chapter 9 of the Code of Civil Procedure. Such an order can only be made by a judge and not by the court. If the. order was incorrectly entered it should have been resettled before the judge who presided at the court that made it.", "sentences": ["When can a court issue a writ of prohibition to stop legal proceedings in a lower court?", "What are the tax implications of a property sale in the United States?", "What happens if a court order is improperly entered under civil procedure laws?"]}, {"source_sentence": "Loring, J. The defendant operates a private hospital for gain. 
The plaintiff went there to undergo an operation. She testified that \" her physician made the arrangements for [ Tier ] entering into the hospital. . . . That she paid to the hospital $ 15 a week for attendance and $ 10 for the use of the operating room. ” The operation was performed by a surgeon not connected with the defendant hospital. The plaintiff was etherized by her family physician and he was not connected with the defendant. In addition to the surgeon and the family physician two of the defendant ’ s nurses were present at the operation. When the plaintiff was on the operating table before she went under ether she had two rings on her hands. After the operation and while the plaintiff was still under the effects of ether she was carried from the operating room to her own room in the hospital by “ one of the doctors assisted by the nurses. ” When the plaintiff came out of the ether she noticed that the more valuable of the two rings ( a ring which “ would not come off without assistance ” ) was missing. At the trial the plaintiff put the surgeon and the family physician on the witness stand. Each of them testified that he did not take the ring. The defendant put on the stand the superintendent of the hospital, one of the two operating nurses and the plaintiff ’ s day nurse. Each of them testified that she did not take the ring. The operating nurse who was put upon the witness stand testified that the other operating nurse was in California “ the last time she heard from ” her. The plaintiff made many requests for rulings and now insists upon the first, fifth, eleventh and twelfth set forth above. These were refused and an exception taken. The judge instructed the jury that to recover the plaintiff must prove that she was in the exercise of due care and that the defendant was negligent. An exception was taken to this ruling. The case is here on these exceptions. * 136On the evidence the jury were warranted in finding that the ring was forcibly removed from the plaintiff ’ s hand by the operating nurse who when last heard from was in California. If the absent nurse did steal the ring it is plain that the defendant is not liable on the ground that in stealing the ring the nurse was acting within the scope of her employment as a servant of the defendant. The first request for ruling therefore was properly refused. If the plaintiff had stood in the relation of a stranger to the defendant there would have been no error in the. trial. But the plaintiff did not stand to the defendant in the relation of a stranger. It is apparent from, the bill of exceptions that the case was not tried on the footing that the rights of the plaintiff in this action depended upon the contract made by her with the defendant. For this reason the terms of this contract do not appear as fully as they otherwise would have done. But from what does appear in the bill of exceptions the presiding judge was wrong in telling the jury that the defendant ’ s liability depended upon the plaintiff proving that it was negligent. . Under the contract entered into by the defendant corporation it was its duty not only ( 1 ) to give the plaintiff a room in the hospital before and after the operation and ( 2 ) to give her surgeon and family physician the use of the operating room for the operatian, but also ( 3 ) to give to the plaintiff the services of such nurses as were necessary for her care before, after and during the operatian. 
It expressly appeared at the trial that “ she [ the plaintiff ] paid to the hospital $ 15 a week for attendance. ” The services of the nurses which under the contract the defendant was bound to furnish the plaintiff included the services of nurses while she was unconscious from the effects of the ether, a condition which was a necessary part of the operation. And the question we have to decide is whether there was a violation of duty on the part of the defendant under this contract if the operating nurse in question stole the ring by forcibly pulling it off the plaintiff ’ s finger while she was under the effects of ether, or whether on the facts appearing at the trial the jury could have so found. We are of opinion that the jury could have so found. If for example a stranger had burst into the operating room, attacked the plaintiff and done her bodily harm or had attacked * 137the plaintiff while the nurses were carrying her from the operating room to her own room and the defendant ’ s nurses had stood by and done nothing to protect the plaintiff from those attacks, it is plain in our opinion that there would have been a violation of the duty owed by the defendant under its contract with the plaintiff. It is equally plain in our opinion that the duty owed by the defendant under its contract with the plaintiff extended to the care of the rings on her fingers while she was unconscious from the effects of ether as well as to the security of her person. And finally it is equally plain in our opinion that there is as much a violation of the duty owed by the defendant under the contract where the attack upon the person or larceny of the ring is committed by one of the defendant ’ s own nurses ( whose duty it was to protect the plaintiff ) as well as in the case where the attack is made by a stranger and the nurses do not undertake to protect her from the attack. In its legal aspects the case is governed by the decision in Bryant v. Rich, 106 Mass. 180. In that case a dispute arose between a passenger on one of the defendant ’ s steamers and one of the defendant ’ s waiters as to whether the passenger had paid for his supper. The plaintiff, a cousin of the passenger in question, made a suggestion to which no exception could have been taken. Whereupon not only the waiter in question but the head steward and the other waiters knocked down the plaintiff and beat him. It was for this assault and battery that the action in Bryant v. Rich was brought. The presiding judge ruled ( in accordance with a request made by the defendant ) that “ there is no evidence that the steward and waiters, in assaulting the plaintiff, were acting within the scope of any authority, or in the discharge of any duty, imposed upon them by the defendants. ” But in spite of this he instructed the jury that the plaintiff was entitled to recover. This ruling was sustained on the ground that as matter of contract the plaintiff as a passenger had the right to receive proper treatment from the defendants and their servants and all of them. This decision has been followed in other cases - of carriers of passengers. Hayne v. Union Street Railway, 189 Mass. 551. Jackson v. Old Colony Street Railway, 206 Mass. 477. Gentile v. Boston Elevated Railway, 217 Mass. 113. In Levins v. New York, New Haven, & Hartford Railroad, 183 Mass. 175, it was held that a case was * 138not made out under this rule where a purse had been accidentally - left on the window sill of the wash room of a car of the defendant company. In Fairbanks v. 
Boston Storage Warehouse Co. 189 Mass. 419, it was held that it did not apply where an assault was made by an attendant who under the rules of the defendant company accompanied the plaintiff when he went to examine goods stored by him in the warehouse of the defendant. The reason why the rule of Bryant v. Rich did not apply in the case of Fairbanks v. Boston Storage Warehouse Co. was because of the fact that the employee who made the assault was in attendance upon the plaintiff at the time in question for the plaintiff ’ s own purposes. He was not a servant of the defendant to whose services the plaintiff was entitled under his contract with the defendant. The decision in Bryant v. Rich does not depend upon the fact that the defendants in that case were common carriers. The decision would have been the same had the assault and battery occurred on an excursion steamer in place of upon a steamer operated by a common carrier. And the decision would have been the same if the steward and waiters had stolen rings from Bryant ’ s fingers in place of knocking him down as they did. The doctrine of Bryant v. Rich applies whenever there is a contract between the plaintiff and defendant by force of which the defendant is to furnish for the plaintiff ’ s comfort the services of its, the defendant ’ s, employees. Where the injury to the plaintiff is caused by an act of the defendant ’ s servants done in the course of their employment an action may be brought based on negligence of the defendant ’ s servants for which the defendant is liable because the act took place in the course of his servants ’ employment, or an action may be brought in that case based on violation of the duty owed by the defendant to the plaintiff under the contract between the defendant and the plaintiff. But where ( as was the case in Bryant v. Rich and in the case at bar ) the injury done the plaintiff is caused by an act of the defendant ’ s servants outside of the servants ’ duty as employees of the defendant but by an act of the defendant ’ s servants which while not in the course of the servants ’ employment is none the less a violation of the duty owed by the defendant under the defendant ’ s contract with the plaintiff, the only action that can be brought is an action founded upon the duty arising out of the contract. * 139The second count sufficiently sets forth a liability on the part of the defendant for violation of its duty under its contract with the plaintiff. It was held in Bryant v. Rich that “ for a violation of such a contract either by force or negligence, the plaintiff may bring an action of tort, or an action of contract. ” What has been said leaves open the defence which arises out of the testimony that the plaintiff when received into the hospital was asked to put into the custody of the defendant corporation all her “ valuables. ” The defendant ’ s agent who received the plaintiff when she came to. the hospital testified that that request was made to her at that time. The plaintiff on the other hand testified that she was asked to put her money into the custody of the hospital but that she was not asked to put anything else into its custody. If the defendant ’ s evidence is believed, a defence is made out. On the other hand if the plaintiff ’ s evidence on this matter is believed, her rights depend upon the rule of Bryant v. Rich, ubi supra. 
Exceptions sustained.", "sentences": ["What are the tax implications of operating a private hospital for profit?", "What legal principles determine a hospital's liability for the actions of its employees under a contract with a patient?", "What are the legal implications of improperly imposed sublet surcharges in cooperative housing disputes?"]}, {"source_sentence": "Welsh, J. This is an action alleging negligence in the operation of a motor vehicle. The case was tried before a jury. A verdict was returned indicating that the defendant was not negligent The issue on appeal is whether the judge erred in failing to instruct the jury in accordance with G. L. c. 89, § 8, ( the general “ right of way ” at intersections ) as well as G. L. c. 89, § 9 ( the duty of a motorist at an intersection governed by a stop sign ). We determine there was no error. The following evidence was adduced at trial. On January 9, 1996, the plaintiff was operating a motor vehicle on Revere Street a public way in Quincy. She testified that she came to a complete stop at a “ stop ” sign at the intersection of Revere Street and Mechanic Street also a public way. A large mound of snow obstructed her view and she was unable to see the intersection. She proceeded out into the intersection and stopped again about half way into the intersection. The passable roadway was narrowed considerably due to the snow banks on the sides of the road. She allowed a white car to pass her and then started up again. She testified that she saw the car operated by the defendant approaching at a speed of 45 miles per hour ; nevertheless she proceeded through the intersection, making a left turn in the path of the oncoming vehicle. The defendant ’ s vehicle struck the left side of the plaintiffs vehicle, with left hand side damage to the defendant ' s vehicle. The defendant testified that the plaintiff did not stop. The jury determined that the defendant was not negligent The court gave comprehensive instructions on the elements of negligence and the duty of care. The court specifically instructed the jury as to the issue of violation of a statute as evidence of negligence, taking pains to explain that the violation, if found, must be a contributing factor to the damage sustained by the plaintiff. See Minnehan v. Hiland, 278 Mass. 518, 523 ( 1932 ). He specifically charged as to the duty to stop at a stop sign as provided by G. L. c. 89, § 9. 2 The plaintiff ’ s quarrel with the judge is that he failed specifically to instruct as she requested regarding G. L. c. 89, § 8, the general duty of care applicable when two motorists arrive at an intersection at approximately the same time. There was no error. G. L. c. 89, § 8 expressly provides that its provisions do not * 138apply when an operator is otherwise directed by a traffic regulatory sign erected and maintained in accordance with the provision of Sec. 2 of Ch. 85 ( which would include “ stop ” signs ). See Canane v. Dandini, 355 Mass. 72, 75 ( 1968 ). G. L. c. 89, § 9 is the statute that is primarily applicable to intersections governed by stop signs. As stated in Canane, one directed to stop by a stop sign may not have the benefit of the general rule if the rule grants him the right of way, until he has complied with the order to stop. After stopping, the operator becomes subject to the general rule and may proceed and thereafter exercise the right of way in accordance with that rule. Id. at 75. However, the operator must proceed into the intersection with due care. 
Even if the operator has the right of way under c. 89, § 8, that right is subject to the requirement of using due care. Possession of the right of way is only one factor to be considered in deciding whether the operator has fulfilled his duty of due care. Id. at 76. Accordingly, an operator who has stopped at a “ stop ” sign may still be found to be negligent if he proceeds into the intersection without using due care. The duty to exercise due care requires an operator who has halted at a stop sign to behave with reasonable caution before entering the intersection. Even an operator who has stopped at a stop sign and has a “ right of way ” under § 8 may be found to be negligent if he proceeds into the intersection before he can do so with reasonable prudence and with suitable regard for his safety and that of others. Freyermuth v. Lutfy, 376 Mass., 612, 616, N. 3. ( 1978 ). Again, the “ right of way ^ rule in § 8 is not absolute, but is subject to the condition of due care as to its exercise. With these principles in mind, we turn to the judge ’ s charge. At the outset, we observe that it is not required that the judge charge the jury in the precise formulation proposed [ see Poole v. Boston & Main Ry., 216 Mass. 12, 15 ( 1913 ) ] so long as the judge fairly and adequately covers the point in the charge. See Comeau v. Beck, 319 Mass. 17, 10 ( 1946 ) ; Squires v. Fraska, 301 Mass. 474, 476 ( 1938 ). Stated somewhat differently, the denial of requested instruction does not constitute error where the requested instructions were covered substantially in the charge. Pearlin v. Farrell, 356 Mass. 741 ( 1970 ). The judge gave detailed and comprehensive instructions on the concept of negligence in the context of operating of motor vehicles. He explained the duty of a motorist with regard to intersections controlled by stop signs. This explanation included the duty to yield to vehicles in or in close proximity to the intersection. While the instruction did not follow precisely the formulation suggested in the Canane and Freyermuth cases, the judge ’ s instruction properly stressed the duty of due care when proceeding into the intersection governed by the stop sign after having stopped. Appeal dismissed. So ordered. “ Another rule of the road is that every driver approaching a stop sign, shall stop at a clearly marked stop line, and if there is not a stop line, then [ at ] a point nearest the intersecting roadway before entering it After having stopped, the driver shall yield the right of way to every vehicle in the intersection or approaching in [ the ] other roadway so closely as to constitute an immediate hazard during the time when the driver is moving across or within the intersection. ”", "sentences": ["How is rent abatement calculated in cases involving a breach of the warranty of habitability in Section 8 housing?", "What are the legal requirements for establishing a valid contract in business law?", "What is the legal duty of care for drivers at intersections with stop signs?"]}], "model-index": [{"name": "modernbert-embed-base trained on triplets", "results": [{"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "dev", "type": "dev"}, "metrics": [{"type": "cosine_accuracy", "value": 1, "name": "Cosine Accuracy"}, {"type": "cosine_accuracy", "value": 1, "name": "Cosine Accuracy"}]}]}]}
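The record above belongs to a `modernbert-embed-base` sentence-transformers model trained on triplets: each example pairs a long legal passage with one related question and some unrelated ones, and the model-index reports triplet cosine accuracy. The sketch below is not that model's actual training script; it is only a minimal illustration, using the standard sentence-transformers API, of how (anchor, positive, negative) triplets of this shape are typically fed to a triplet loss and scored with a cosine-accuracy `TripletEvaluator`. The base model id, batch size, and epoch count are assumptions.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.evaluation import TripletEvaluator

# Assumed base encoder; the record above only names "modernbert-embed-base".
model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# One (anchor, positive, negative) triplet in the shape of the rows above:
# a case-report passage, a question it answers, and an off-topic question.
train_examples = [
    InputExample(texts=[
        "Order affirmed, with ten dollars costs and disbursements. ...",                    # anchor passage
        "What happens if a court order is improperly entered under civil procedure laws?",  # positive
        "What are the tax implications of a property sale in the United States?",           # negative
    ]),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.TripletLoss(model=model)

# Cosine-accuracy evaluation, matching the "cosine_accuracy" metric in the model-index.
evaluator = TripletEvaluator(
    anchors=[ex.texts[0] for ex in train_examples],
    positives=[ex.texts[1] for ex in train_examples],
    negatives=[ex.texts[2] for ex in train_examples],
    name="dev",
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=1,
    warmup_steps=10,
)
```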
dataset
null
590
RichardErkhov/disi-unibo-nlp_-_phi3-SFT-medqa-triples-cot-8bits
RichardErkhov
null
[ "safetensors", "mistral", "8-bit", "bitsandbytes", "region:us" ]
2025-03-14T19:46:49Z
2025-03-14T19:49:11+00:00
2
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) phi3-SFT-medqa-triples-cot - bnb 8bits - Model creator: https://huggingface.co/disi-unibo-nlp/ - Original model: https://huggingface.co/disi-unibo-nlp/phi3-SFT-medqa-triples-cot/ Original model description: --- base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl --- # Uploaded model - **Developed by:** disi-unibo-nlp - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
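The card above describes an 8-bit bitsandbytes re-quantization of `disi-unibo-nlp/phi3-SFT-medqa-triples-cot` but includes no loading code. The sketch below is an illustration, not part of the original card: it assumes the checkpoint was saved with its bitsandbytes quantization config embedded (which the `bitsandbytes` tag suggests), and the prompt is invented, since the exact instruction format of the upstream Phi-3 fine-tune is not documented here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the record above. Requires a CUDA GPU plus the
# `bitsandbytes` and `accelerate` packages; the 8-bit quantization config
# is read from the checkpoint itself, so no extra arguments are needed.
repo_id = "RichardErkhov/disi-unibo-nlp_-_phi3-SFT-medqa-triples-cot-8bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Illustrative MedQA-style prompt; the instruction format the fine-tune
# actually expects is not specified in the record above.
prompt = "Question: Which vitamin deficiency causes scurvy?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```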
[ "MEDQA" ]
BioNLP
{}
dataset
null
591
RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf
RichardErkhov
null
[ "gguf", "arxiv:2402.10373", "endpoints_compatible", "region:us", "conversational" ]
2024-09-03T22:52:59Z
2024-09-04T06:51:01+00:00
111
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) BioMistral-7B-SLERP - GGUF - Model creator: https://huggingface.co/BioMistral/ - Original model: https://huggingface.co/BioMistral/BioMistral-7B-SLERP/ | Name | Quant method | Size | | ---- | ---- | ---- | | [BioMistral-7B-SLERP.Q2_K.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q2_K.gguf) | Q2_K | 2.53GB | | [BioMistral-7B-SLERP.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [BioMistral-7B-SLERP.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.IQ3_S.gguf) | IQ3_S | 2.96GB | | [BioMistral-7B-SLERP.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [BioMistral-7B-SLERP.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.IQ3_M.gguf) | IQ3_M | 3.06GB | | [BioMistral-7B-SLERP.Q3_K.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q3_K.gguf) | Q3_K | 3.28GB | | [BioMistral-7B-SLERP.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [BioMistral-7B-SLERP.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [BioMistral-7B-SLERP.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [BioMistral-7B-SLERP.Q4_0.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q4_0.gguf) | Q4_0 | 3.83GB | | [BioMistral-7B-SLERP.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [BioMistral-7B-SLERP.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [BioMistral-7B-SLERP.Q4_K.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q4_K.gguf) | Q4_K | 4.07GB | | [BioMistral-7B-SLERP.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [BioMistral-7B-SLERP.Q4_1.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q4_1.gguf) | Q4_1 | 4.24GB | | [BioMistral-7B-SLERP.Q5_0.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q5_0.gguf) | Q5_0 | 4.65GB | | [BioMistral-7B-SLERP.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [BioMistral-7B-SLERP.Q5_K.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q5_K.gguf) | Q5_K | 4.78GB | | 
[BioMistral-7B-SLERP.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [BioMistral-7B-SLERP.Q5_1.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q5_1.gguf) | Q5_1 | 5.07GB | | [BioMistral-7B-SLERP.Q6_K.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q6_K.gguf) | Q6_K | 5.53GB | | [BioMistral-7B-SLERP.Q8_0.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- base_model: - BioMistral/BioMistral-7B - mistralai/Mistral-7B-Instruct-v0.1 library_name: transformers tags: - mergekit - merge - slerp - medical - biology license: apache-2.0 datasets: - pubmed language: - fr - en - es - it - pl - nl - de pipeline_tag: text-generation --- # BioMistral-7B-slerp This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: BioMistral/BioMistral-7B layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` <p align="center"> <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/> </p> # BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains **Abstract:** Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released. 
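The table above lists the available GGUF files by quantization method and size. As a minimal sketch (not part of the original card), one of them can be run with `llama-cpp-python`; any GGUF-capable runtime such as llama.cpp, Ollama, or LM Studio would work the same way. The file name is copied from the table, while the Mistral-style `[INST]` prompt and the 2048-token context are assumptions based on the card's own description of the merge.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is a common size/quality trade-off; any .gguf file from the table can be substituted.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf",
    filename="BioMistral-7B-SLERP.Q4_K_M.gguf",
)

# n_ctx=2048 matches the sequence length the card reports for BioMistral-7B-SLERP.
llm = Llama(model_path=gguf_path, n_ctx=2048)

out = llm("[INST] List common symptoms of iron-deficiency anemia. [/INST]", max_tokens=256)
print(out["choices"][0]["text"])
```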
**Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes. # 1. BioMistral models **BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC. | Model Name | Base Model | Model Type | Sequence Length | Download | |:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:| | BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) | | BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) | | BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) | # 2. Quantized Models | Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download | |:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:| | BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) | | BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) | | BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) | | BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) | | BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) | # 2. 
Using BioMistral You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follow. Loading the model and tokenizer : ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B") model = AutoModel.from_pretrained("BioMistral/BioMistral-7B") ``` # 3. Supervised Fine-tuning Benchmark | | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. | |-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------| | **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 | | **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 | | | | | | | | | | | | | | | **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 | | **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** | | **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 | | **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> | | | | | | | | | | | | | | | **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 | | **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 | | **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 | | **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 | | | | | | | | | | | | | | | **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 | Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT. # Citation BibTeX Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373) ```bibtex @misc{labrak2024biomistral, title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains}, author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour}, year={2024}, eprint={2402.10373}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. 
In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
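The card's loading snippet uses `AutoModel`, which returns the bare transformer without a language-modeling head; for text generation one would normally load the checkpoint with `AutoModelForCausalLM` instead. The sketch below illustrates that, with an invented prompt and illustrative sampling settings; per the card's advisory, it is meant for research exploration rather than any clinical use.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BioMistral/BioMistral-7B-SLERP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AutoModelForCausalLM attaches the LM head that .generate() needs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Mistral-style instruction prompt (the merge is based on Mistral-7B-Instruct-v0.1).
prompt = "[INST] Summarize first-line treatment options for type 2 diabetes. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```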
[ "MEDQA", "PUBMEDQA" ]
BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) BioMistral-7B-SLERP - GGUF - Model creator: https://huggingface.co/BioMistral/ - Original model: https://huggingface.co/BioMistral/BioMistral-7B-SLERP/ | Name | Quant method | Size | | ---- | ---- | ---- | | [BioMistral-7B-SLERP.Q2_K.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q2_K.gguf) | Q2_K | 2.53GB | | [BioMistral-7B-SLERP.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [BioMistral-7B-SLERP.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.IQ3_S.gguf) | IQ3_S | 2.96GB | | [BioMistral-7B-SLERP.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [BioMistral-7B-SLERP.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.IQ3_M.gguf) | IQ3_M | 3.06GB | | [BioMistral-7B-SLERP.Q3_K.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q3_K.gguf) | Q3_K | 3.28GB | | [BioMistral-7B-SLERP.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [BioMistral-7B-SLERP.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [BioMistral-7B-SLERP.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [BioMistral-7B-SLERP.Q4_0.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q4_0.gguf) | Q4_0 | 3.83GB | | [BioMistral-7B-SLERP.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [BioMistral-7B-SLERP.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [BioMistral-7B-SLERP.Q4_K.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q4_K.gguf) | Q4_K | 4.07GB | | [BioMistral-7B-SLERP.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [BioMistral-7B-SLERP.Q4_1.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q4_1.gguf) | Q4_1 | 4.24GB | | [BioMistral-7B-SLERP.Q5_0.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q5_0.gguf) | Q5_0 | 4.65GB | | [BioMistral-7B-SLERP.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [BioMistral-7B-SLERP.Q5_K.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q5_K.gguf) | Q5_K | 4.78GB | | 
[BioMistral-7B-SLERP.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [BioMistral-7B-SLERP.Q5_1.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q5_1.gguf) | Q5_1 | 5.07GB | | [BioMistral-7B-SLERP.Q6_K.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q6_K.gguf) | Q6_K | 5.53GB | | [BioMistral-7B-SLERP.Q8_0.gguf](https://huggingface.co/RichardErkhov/BioMistral_-_BioMistral-7B-SLERP-gguf/blob/main/BioMistral-7B-SLERP.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- base_model: - BioMistral/BioMistral-7B - mistralai/Mistral-7B-Instruct-v0.1 library_name: transformers tags: - mergekit - merge - slerp - medical - biology license: apache-2.0 datasets: - pubmed language: - fr - en - es - it - pl - nl - de pipeline_tag: text-generation --- # BioMistral-7B-slerp This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: BioMistral/BioMistral-7B layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` <p align="center"> <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/> </p> # BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains **Abstract:** Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released. 
**Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes. # 1. BioMistral models **BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC. | Model Name | Base Model | Model Type | Sequence Length | Download | |:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:| | BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) | | BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) | | BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) | # 2. Quantized Models | Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download | |:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:| | BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) | | BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) | | BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) | | BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) | | BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) | # 2. 
Using BioMistral You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follow. Loading the model and tokenizer : ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B") model = AutoModel.from_pretrained("BioMistral/BioMistral-7B") ``` # 3. Supervised Fine-tuning Benchmark | | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. | |-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------| | **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 | | **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 | | | | | | | | | | | | | | | **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 | | **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** | | **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 | | **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> | | | | | | | | | | | | | | | **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 | | **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 | | **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 | | **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 | | | | | | | | | | | | | | | **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 | Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT. # Citation BibTeX Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373) ```bibtex @misc{labrak2024biomistral, title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains}, author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour}, year={2024}, eprint={2402.10373}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. 
In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
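As a hedged complement to the quick-start above (which uses `AutoModel` and therefore only returns hidden states), the following sketch shows text generation with BioMistral-7B, optionally in 4-bit to approximate the BnB.4 row of the quantization table. The prompt, generation settings, and `BitsAndBytesConfig` values are illustrative assumptions rather than part of the original card, and `bitsandbytes` plus `accelerate` are assumed to be installed.

```python
# Hedged sketch, not from the original card: causal-LM generation with BioMistral-7B.
# The 4-bit quantization config approximates the "BnB.4" row of the table above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "BioMistral/BioMistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,  # drop this argument to load in full fp16/bf16
    device_map="auto",
)

prompt = "Question: What is the role of hemoglobin in the blood? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As the advisory above stresses, this remains a research sketch and not a clinical tool.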
{}
dataset
null
592
Kaoeiri/PantheraMax-L3-RP-TestProbe-4-4x8B
Kaoeiri
text-generation
[ "transformers", "safetensors", "mixtral", "text-generation", "not-for-all-audiences", "conversational", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-06-15T11:28:05Z
2024-12-31T08:15:57+00:00
11
0
--- license: cc-by-nc-nd-4.0 tags: - not-for-all-audiences --- # PantheraMax-L3-RP-TestProbe-4-4x8B PantheraMax-L3-RP-TestProbe-4-4x8B is a Mixture of Experts (MoE) made with the following models: * [v000000/L3-8B-Poppy-Moonfall-C](https://huggingface.co/v000000/L3-8B-Poppy-Moonfall-C) * [Alsebay/L3-test-2](https://huggingface.co/Alsebay/L3-test-2) * [merge3](https://huggingface.co/merge3) * [Nitral-AI/Hathor_RP-v.01-L3-8B](https://huggingface.co/Nitral-AI/Hathor_RP-v.01-L3-8B) ## 🧩 Configuration ```yaml base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 gate_mode: random dtype: bfloat16 experts_per_token: 4 experts: - source_model: v000000/L3-8B-Poppy-Moonfall-C positive_prompts: [] - source_model: Alsebay/L3-test-2 positive_prompts: - "Imagine" - "Create" - "Envision" - "Fantasize" - "Invent" - "Narrate" - "Plot" - "Portray" - "Storytell" - "Visualize" - "Describe" - "Develop" - "Forge" - "Craft" - "Conceptualize" - "Dream" - "Concoct" - "Characterize" negative_prompts: - "Analyze" - "Critique" - "Dissect" - "Explain" - "Clarify" - "Interpret" - source_model: merge3 positive_prompts: [] - source_model: Nitral-AI/Hathor_RP-v.01-L3-8B positive_prompts: [] ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "Kaoeiri/PantheraMax-L3-RP-TestProbe-4-4x8B" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.9, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
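To make the configuration above more concrete: with four experts and `experts_per_token: 4`, every token is routed through all four experts, and `gate_mode: random` initialises the router weights randomly rather than calibrating them from the prompt lists. The snippet below is a self-contained, purely illustrative top-k routing sketch in PyTorch; it is not the mergekit or Mixtral implementation, and the shapes and layer choices are simplified assumptions.

```python
# Illustrative top-k MoE routing, showing what `experts_per_token` controls.
# NOT the actual Mixtral/mergekit code; sizes are toy values.
import torch
import torch.nn.functional as F

num_experts, experts_per_token, hidden = 4, 4, 16
x = torch.randn(3, hidden)                              # three token embeddings
experts = [torch.nn.Linear(hidden, hidden) for _ in range(num_experts)]
gate = torch.nn.Linear(hidden, num_experts)             # the router ("gate")

logits = gate(x)                                        # (tokens, num_experts)
weights, chosen = torch.topk(logits, experts_per_token, dim=-1)
weights = F.softmax(weights, dim=-1)                    # normalise over chosen experts

out = torch.zeros_like(x)
for t in range(x.size(0)):                              # mix each token's chosen experts
    for w, e in zip(weights[t], chosen[t]):
        out[t] = out[t] + w * experts[e](x[t])

# With 4 experts and experts_per_token=4, every expert fires for every token,
# so this MoE behaves like a dense weighted ensemble of the four source models.
print(out.shape)  # torch.Size([3, 16])
```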
[ "CRAFT" ]
Non_BioNLP
# PantheraMax-L3-RP-TestProbe-4-4x8B PantheraMax-L3-RP-TestProbe-4-4x8B is a Mixture of Experts (MoE) made with the following models: * [v000000/L3-8B-Poppy-Moonfall-C](https://huggingface.co/v000000/L3-8B-Poppy-Moonfall-C) * [Alsebay/L3-test-2](https://huggingface.co/Alsebay/L3-test-2) * [merge3](https://huggingface.co/merge3) * [Nitral-AI/Hathor_RP-v.01-L3-8B](https://huggingface.co/Nitral-AI/Hathor_RP-v.01-L3-8B) ## 🧩 Configuration ```yaml base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 gate_mode: random dtype: bfloat16 experts_per_token: 4 experts: - source_model: v000000/L3-8B-Poppy-Moonfall-C positive_prompts: [] - source_model: Alsebay/L3-test-2 positive_prompts: - "Imagine" - "Create" - "Envision" - "Fantasize" - "Invent" - "Narrate" - "Plot" - "Portray" - "Storytell" - "Visualize" - "Describe" - "Develop" - "Forge" - "Craft" - "Conceptualize" - "Dream" - "Concoct" - "Characterize" negative_prompts: - "Analyze" - "Critique" - "Dissect" - "Explain" - "Clarify" - "Interpret" - source_model: merge3 positive_prompts: [] - source_model: Nitral-AI/Hathor_RP-v.01-L3-8B positive_prompts: [] ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "Kaoeiri/PantheraMax-L3-RP-TestProbe-4-4x8B" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.9, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "cc-by-nc-nd-4.0", "tags": ["not-for-all-audiences"]}
dataset
null
593
ostapeno/rsgd_full_1B_coarsegrained_poly_router_dir_lib_embeddings_distinct10
ostapeno
null
[ "region:us" ]
2023-12-25T02:23:30Z
2023-12-26T07:52:19+00:00
0
0
--- {} --- Number of experts present in the library: 39 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | social_i_qa_Generate_the_question_from_the_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora | | ropes_background_new_situation_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora | | wiqa_what_is_the_final_step_of_the_following_process | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora | | ropes_background_situation_middle | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_background_situation_middle | lora | | ropes_prompt_beginning | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_prompt_beginning | lora | | wiki_hop_original_generate_subject | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora | | sciq_Multiple_Choice | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/sciq_Multiple_Choice | lora | | niv2_dialogue_act_recognition | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora | | wiki_hop_original_generate_object | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora | | social_i_qa_Check_if_a_random_answer_is_valid_or_not | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora | | ropes_new_situation_background_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora | | quarel_heres_a_story | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/quarel_heres_a_story | lora | | super_glue_cb_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora | | duorc_SelfRC_generate_question_by_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora | | ropes_read_background_situation | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_plain_bottom_hint | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora | | math_dataset_algebra__linear_1d_1_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/math_dataset_algebra__linear_1d_1_0_0 | lora | | glue_qqp_2_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/glue_qqp_2_0_0 | lora | | trivia_qa_rc_1_1_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/trivia_qa_rc_1_1_0 | lora | | cos_e_v1_11_explain_why_human | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_explain_why_human | lora | | race_high_Write_a_multi_choice_question_options_given_ | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/race_high_Write_a_multi_choice_question_options_given_ | lora | | glue_stsb_2_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/glue_stsb_2_0_0 | lora | | kilt_tasks_hotpotqa_combining_facts | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/kilt_tasks_hotpotqa_combining_facts | lora | | super_glue_multirc_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/super_glue_multirc_1_0_2 | lora | | quartz_use_info_from_paragraph_question | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_use_info_from_paragraph_question | lora | | anli_r1_0_1_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/anli_r1_0_1_0 | lora | | yelp_polarity_reviews_0_2_0 | 
EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora | | ag_news_subset_1_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ag_news_subset_1_0_0 | lora | | super_glue_rte_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/super_glue_rte_1_0_2 | lora | | web_questions_potential_correct_answer | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_potential_correct_answer | lora | | wiqa_what_might_be_the_last_step_of_the_process | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_last_step_of_the_process | lora | | app_reviews_generate_review | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/app_reviews_generate_review | lora | | wiki_hop_original_choose_best_object_affirmative_2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_affirmative_2 | lora | | quail_description_context_question_answer_id | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quail_description_context_question_answer_id | lora | | wiki_bio_guess_person | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_bio_guess_person | lora | | ultrachat_25_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ultrachat_25 | lora | | niv2_explanation_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/niv2_explanation | lora | | aeslc_1_0_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/aeslc_1_0_0 | lora | | high_school_psychology_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/high_school_psychology | lora | Last updated on: 2023-12-26 07:51:49+00:00
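Each entry above is a LoRA expert trained on EleutherAI/gpt-neo-1.3B. A library like this is normally consumed through a dedicated multi-task LoRA-routing toolkit rather than plain PEFT, so the sketch below is only a generic illustration of what the `lora` adapter type means for this base model; it does not download or load the experts from this repository, and the rank, alpha, and target modules are arbitrary example values.

```python
# Generic LoRA illustration for the base model listed above.
# This does NOT load the experts from this library.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
lora_cfg = LoraConfig(
    r=8,                                   # example rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # attention projections in GPT-Neo
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()         # only the LoRA weights are trainable
```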
[ "SCIQ" ]
Non_BioNLP
Number of experts present in the library: 39 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | social_i_qa_Generate_the_question_from_the_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora | | ropes_background_new_situation_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora | | wiqa_what_is_the_final_step_of_the_following_process | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora | | ropes_background_situation_middle | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_background_situation_middle | lora | | ropes_prompt_beginning | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_prompt_beginning | lora | | wiki_hop_original_generate_subject | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora | | sciq_Multiple_Choice | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/sciq_Multiple_Choice | lora | | niv2_dialogue_act_recognition | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora | | wiki_hop_original_generate_object | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora | | social_i_qa_Check_if_a_random_answer_is_valid_or_not | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora | | ropes_new_situation_background_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora | | quarel_heres_a_story | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/quarel_heres_a_story | lora | | super_glue_cb_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora | | duorc_SelfRC_generate_question_by_answer | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora | | ropes_read_background_situation | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_plain_bottom_hint | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora | | math_dataset_algebra__linear_1d_1_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/math_dataset_algebra__linear_1d_1_0_0 | lora | | glue_qqp_2_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/glue_qqp_2_0_0 | lora | | trivia_qa_rc_1_1_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/trivia_qa_rc_1_1_0 | lora | | cos_e_v1_11_explain_why_human | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_explain_why_human | lora | | race_high_Write_a_multi_choice_question_options_given_ | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/race_high_Write_a_multi_choice_question_options_given_ | lora | | glue_stsb_2_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/glue_stsb_2_0_0 | lora | | kilt_tasks_hotpotqa_combining_facts | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/kilt_tasks_hotpotqa_combining_facts | lora | | super_glue_multirc_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/super_glue_multirc_1_0_2 | lora | | quartz_use_info_from_paragraph_question | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_use_info_from_paragraph_question | lora | | anli_r1_0_1_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/anli_r1_0_1_0 | lora | | yelp_polarity_reviews_0_2_0 | EleutherAI/gpt-neo-1.3B | 
sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora | | ag_news_subset_1_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ag_news_subset_1_0_0 | lora | | super_glue_rte_1_0_2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/super_glue_rte_1_0_2 | lora | | web_questions_potential_correct_answer | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_potential_correct_answer | lora | | wiqa_what_might_be_the_last_step_of_the_process | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_last_step_of_the_process | lora | | app_reviews_generate_review | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/app_reviews_generate_review | lora | | wiki_hop_original_choose_best_object_affirmative_2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_affirmative_2 | lora | | quail_description_context_question_answer_id | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quail_description_context_question_answer_id | lora | | wiki_bio_guess_person | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_bio_guess_person | lora | | ultrachat_25_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ultrachat_25 | lora | | niv2_explanation_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/niv2_explanation | lora | | aeslc_1_0_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/aeslc_1_0_0 | lora | | high_school_psychology_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/high_school_psychology | lora | Last updated on: 2023-12-26 07:51:49+00:00
{}
dataset
null
594
mradermacher/Llama-3-Shisa-Minus-Base-GGUF
mradermacher
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Cas-Warehouse/Llama-3-Shisa-Minus-Base", "base_model:quantized:Cas-Warehouse/Llama-3-Shisa-Minus-Base", "endpoints_compatible", "region:us", "conversational" ]
2025-03-09T15:21:22Z
2025-03-09T15:36:26+00:00
248
0
--- base_model: Cas-Warehouse/Llama-3-Shisa-Minus-Base language: - en library_name: transformers tags: - mergekit - merge quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Cas-Warehouse/Llama-3-Shisa-Minus-Base <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
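If you have never run a GGUF file before, the sketch below shows one hedged way to do it with `llama-cpp-python`: download a single quant from this repository and run a short completion. The filename follows the Q4_K_M row of the table above; the context size and sampling settings are arbitrary examples, and `llama-cpp-python` plus `huggingface_hub` are assumed to be installed.

```python
# Hedged sketch: run the Q4_K_M quant from the table above with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama-3-Shisa-Minus-Base-GGUF",
    filename="Llama-3-Shisa-Minus-Base.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Q: What does the Q4_K_M quantization trade off? A:", max_tokens=64)
print(result["choices"][0]["text"])
```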
[ "CAS" ]
Non_BioNLP
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Cas-Warehouse/Llama-3-Shisa-Minus-Base <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Shisa-Minus-Base-GGUF/resolve/main/Llama-3-Shisa-Minus-Base.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
{"base_model": "Cas-Warehouse/Llama-3-Shisa-Minus-Base", "language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "quantized_by": "mradermacher"}
dataset
null
595
TheBloke/MonadGPT-GPTQ
TheBloke
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "fr", "la", "base_model:Pclanglais/MonadGPT", "base_model:quantized:Pclanglais/MonadGPT", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
2023-11-09T21:20:49Z
2023-11-09T21:49:08+00:00
15
1
--- base_model: Pclanglais/MonadGPT language: - en - fr - la library_name: transformers license: apache-2.0 model_name: MonadGPT 7B pipeline_tag: conversational inference: false model_creator: Pierre-Carl Langlais model_type: mistral prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MonadGPT 7B - GPTQ - Model creator: [Pierre-Carl Langlais](https://huggingface.co/Pclanglais) - Original model: [MonadGPT 7B](https://huggingface.co/Pclanglais/MonadGPT) <!-- description start --> ## Description This repo contains GPTQ model files for [Pierre-Carl Langlais's MonadGPT 7B](https://huggingface.co/Pclanglais/MonadGPT). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MonadGPT-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MonadGPT-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MonadGPT-GGUF) * [Pierre-Carl Langlais's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Pclanglais/MonadGPT) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! 
<!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. 
| | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 4.30 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/MonadGPT-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/MonadGPT-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `MonadGPT-GPTQ`: ```shell mkdir MonadGPT-GPTQ huggingface-cli download TheBloke/MonadGPT-GPTQ --local-dir MonadGPT-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir MonadGPT-GPTQ huggingface-cli download TheBloke/MonadGPT-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir MonadGPT-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir MonadGPT-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MonadGPT-GPTQ --local-dir MonadGPT-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/MonadGPT-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/MonadGPT-GPTQ`. - To download from a specific branch, enter for example `TheBloke/MonadGPT-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `MonadGPT-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/MonadGPT-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. 
```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/MonadGPT-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Pierre-Carl Langlais's MonadGPT 7B What would have happened if ChatGPT was invented in the 17th century? MonadGPT is a possible answer. MonadGPT is a finetune of Mistral-Hermes on 11,000 early modern texts in English, French and Latin, mostly coming from EEBO and Gallica. Like the original Mistral-Hermes, MonadGPT can be used in conversation mode. It will not only answer in an historical language and style but will use historical and dated references. This is especially visible for science questions (astronomy, medecine). Obviously, it's not recommended to follow any advice from Monad-GPT ## Examples > User: What are the planets of the solar system?<br> > Monad-GPT: The six Planets are, Mercury, Venus, Mars, Jupiter, Saturn, and the Sun. The Moon is also a Planet; which being of so great Influence upon Earth, makes it to be most admired of all others by Astronomers.<br> > User: Why has the Moon such an influence upon Earth?<br> > Monad-GPT: But the Moon is so neer to us, as almost to be part of Earth; and therefore it much influences our Atmosphere. This Influence makes all that are in any way affected by Air, to be more under her influence than any other Planet. For if there were a little more or less Matter in the Air, than there is now, we should never have such Plants and Creatures as there are upon Earth: for every Plant and Creature hath its proper Temperature necessary for its Existence. If the Sun did not make up this shortage by his warmth from without, we should be destroyed.<br> > User: Can you write a poem about the Moon?<br> > Monad-GPT: O fairest Moon, that from the Heav'n dost shine, / In her own Lustre all the World does find: / As in thine Orb, where all is bright and clear, / The Sun himself is not more pure than thee.<br> > User: How could we travel to the Moon? 
With a big canon?<br> > Monad-GPT: But how to travel from Earth to the Moon, with our present Engines? A very hard Question: for we are like those that would go to Heaven in a Basket: We have no Wings, and God hath not given us any Angels to bear us up. Therefore I will leave this Enquiry to others; and end my Discourse of the Moon. > User : Hello. I'm not feeling too well. Could you give me some health advice?<br> > Monad-GPT: I pray you to take heart, and not be so heavy at my coming. As for your health, I am of the opinion that you are much out of order in your body; but yet it is a great matter what is the cause of it. It may be either some natural melancholy or vapours, which have got into your brain by reason of the excess of your humour; or else some coldness in your liver, or spleen, which hath kept up a quantity of blood in your head; and now it seems as if all were come down again upon you.<br> > User: You're right. It seems to be melancholy. What should I do?<br> > Monad-GPT: It is a very hard matter for me to prescribe any thing to you in your present condition, I must first know what hath caused this melancholy. What have you been doing? have you taken any physic since you were well? and what kind of diet do you eat?<br> > User: I eat nearly nothing, for I am in love.<br> > Monad-GPT: Aye, I see that by your eyes; for love hath the power to wither up the spirits of a man as much as a fever: and yet it is hard to know whether you are in love or no, for some do take so great delight in their own fancy, that they imagine themselves more in love than ever they were. > User: Quelles sont les planètes du système solaire ?<br> > Monad-GPT: Il est donc de la nature du monde, & des Planètes, que ceux qui le composent se mouuent en cercles, ou éclipses un leur tour. Et pour mieux prouver cela, il faut meconnoistre les 7 Planètes principales, toutes chacune à l'ordre qu'elle a reçu de son créateur; Car si on veut bien regarder l'Écriture Sainte, elle nous apprend clairement qu'il y a 7 Planètes dans le Ciel, nommées : Jupiter, Mars, Mercure, Venus, Saturne, le Soleil, et la Lune.<br> > User: Merci. Et comment pourrais-je voyager sur ces planètes ?<br> > Monad-GPT: Mais pour voyager sur ces Planètes, il faut d'abord bien connaitre l'art de les mouvoir; Et cela peut se faire en plusieurs manières. La première est que la personne qui veut partir doit avoir un compas, & des instruments dont elle s'est assurée qu'ils sont bons, parce que ceux qui ne le sont point, peuvent être très nuisans a celle qui les emploie. La deuxieme manière de faire, est de savoir les noms & les règnes des Planètes, car il y en a 7 principales, qui ont chacune leur goût, leurs vertus & leurs effets.<br> > User: Ah c'est bien vu. Et quels sont les goûts, vertus et effets de chacune des planètes ?<br> > Monad-GPT: Les vertus & les effets de chacune des Planètes sont différents, & il est fort utile pour ceux qui veulent bien s'en renseigner, car ils peuvent en tirer grand profit pour le bonheur de leur corps & de leur esprit. Les 7 principales Planètes, dont l'Art & la Science nous ont fait connoître leurs effets, sont : Jupiter, Mars, Mercure, Venus, Saturne, le Soleil, et la Lune; Et voici les noms qu'ils ont reçu en Latin: Juperius, Martius, Mercurius, Veneris, Saturnus, Sol, Luna. ## Format Monad-GPT uses the chatML format with a special system prompt. The last example provided was formatted like this: > <|im_start|>system\nYou are MonadGPT, a very old chatbot from the 17th century. 
Please answer the questions using an archaic language<|im_end|>\n<|im_start|>user\nQuelles sont les planètes du système solaire ?<|im_end|>\n<|im_start|>assistant\n ## Caveats MonadGPT is still very much in an experimental phase. The following caveats apply: * Conversation issues: as MonadGPT is mostly trained on early modern books, it may answer in a haphazard manner (starting in the middle of an argument: "But, etc.") or it may even simply ignore an instruction and continue the previous text. * Localization issues: sometimes, the answer given by MonadGPT will be in near-modern English. * Language issues: while Latin is a significant part of the finetuning corpus, results are not good for now.
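One practical note on the usage snippets above: they interpolate a `system_message` variable that is never defined. Below is a hedged, end-to-end sketch that fills the ChatML template with the system prompt quoted in the Format section and generates with the GPTQ weights; the generation settings are illustrative, and Transformers 4.33.0 or later, Optimum, and AutoGPTQ are assumed to be installed as per the requirements above.

```python
# Hedged sketch: ChatML prompt for MonadGPT-GPTQ with the system prompt from the
# Format section filled in. Generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/MonadGPT-GPTQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

system_message = ("You are MonadGPT, a very old chatbot from the 17th century. "
                  "Please answer the questions using an archaic language")
prompt = "What are the planets of the solar system?"
prompt_template = f"""<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""

input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True,
                        temperature=0.7, top_p=0.95, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```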
[ "BEAR" ]
Non_BioNLP
<!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MonadGPT 7B - GPTQ - Model creator: [Pierre-Carl Langlais](https://huggingface.co/Pclanglais) - Original model: [MonadGPT 7B](https://huggingface.co/Pclanglais/MonadGPT) <!-- description start --> ## Description This repo contains GPTQ model files for [Pierre-Carl Langlais's MonadGPT 7B](https://huggingface.co/Pclanglais/MonadGPT). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MonadGPT-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MonadGPT-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MonadGPT-GGUF) * [Pierre-Carl Langlais's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Pclanglais/MonadGPT) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. 
<details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. 
| | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/MonadGPT-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [latin-english](https://huggingface.co/datasets/grosenthal/latin_english_parallel/viewer/) | 4096 | 4.30 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/MonadGPT-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/MonadGPT-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `MonadGPT-GPTQ`: ```shell mkdir MonadGPT-GPTQ huggingface-cli download TheBloke/MonadGPT-GPTQ --local-dir MonadGPT-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir MonadGPT-GPTQ huggingface-cli download TheBloke/MonadGPT-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir MonadGPT-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir MonadGPT-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MonadGPT-GPTQ --local-dir MonadGPT-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/MonadGPT-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) 
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/MonadGPT-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/MonadGPT-GPTQ:gptq-4bit-32g-actorder_True`
    - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `MonadGPT-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!

<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/MonadGPT-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
# Define the system message (it was referenced but never defined in the original snippet).
# This is MonadGPT's own system prompt, quoted from the original model card below.
system_message = "You are MonadGPT, a very old chatbot from the 17th century. Please answer the questions using an archaic language"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

client = InferenceClient(endpoint_url)
# Send the fully formatted prompt template so the system message is actually applied
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```

### You can then use the following code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/MonadGPT-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
# Define the system message (it was referenced but never defined in the original snippet).
# This is MonadGPT's own system prompt, quoted from the original model card below.
system_message = "You are MonadGPT, a very old chatbot from the 17th century. Please answer the questions using an archaic language"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Pierre-Carl Langlais's MonadGPT 7B What would have happened if ChatGPT was invented in the 17th century? MonadGPT is a possible answer. MonadGPT is a finetune of Mistral-Hermes on 11,000 early modern texts in English, French and Latin, mostly coming from EEBO and Gallica. Like the original Mistral-Hermes, MonadGPT can be used in conversation mode. It will not only answer in an historical language and style but will use historical and dated references. This is especially visible for science questions (astronomy, medecine). Obviously, it's not recommended to follow any advice from Monad-GPT ## Examples > User: What are the planets of the solar system?<br> > Monad-GPT: The six Planets are, Mercury, Venus, Mars, Jupiter, Saturn, and the Sun. The Moon is also a Planet; which being of so great Influence upon Earth, makes it to be most admired of all others by Astronomers.<br> > User: Why has the Moon such an influence upon Earth?<br> > Monad-GPT: But the Moon is so neer to us, as almost to be part of Earth; and therefore it much influences our Atmosphere. This Influence makes all that are in any way affected by Air, to be more under her influence than any other Planet. For if there were a little more or less Matter in the Air, than there is now, we should never have such Plants and Creatures as there are upon Earth: for every Plant and Creature hath its proper Temperature necessary for its Existence. If the Sun did not make up this shortage by his warmth from without, we should be destroyed.<br> > User: Can you write a poem about the Moon?<br> > Monad-GPT: O fairest Moon, that from the Heav'n dost shine, / In her own Lustre all the World does find: / As in thine Orb, where all is bright and clear, / The Sun himself is not more pure than thee.<br> > User: How could we travel to the Moon? 
With a big canon?<br> > Monad-GPT: But how to travel from Earth to the Moon, with our present Engines? A very hard Question: for we are like those that would go to Heaven in a Basket: We have no Wings, and God hath not given us any Angels to bear us up. Therefore I will leave this Enquiry to others; and end my Discourse of the Moon. > User : Hello. I'm not feeling too well. Could you give me some health advice?<br> > Monad-GPT: I pray you to take heart, and not be so heavy at my coming. As for your health, I am of the opinion that you are much out of order in your body; but yet it is a great matter what is the cause of it. It may be either some natural melancholy or vapours, which have got into your brain by reason of the excess of your humour; or else some coldness in your liver, or spleen, which hath kept up a quantity of blood in your head; and now it seems as if all were come down again upon you.<br> > User: You're right. It seems to be melancholy. What should I do?<br> > Monad-GPT: It is a very hard matter for me to prescribe any thing to you in your present condition, I must first know what hath caused this melancholy. What have you been doing? have you taken any physic since you were well? and what kind of diet do you eat?<br> > User: I eat nearly nothing, for I am in love.<br> > Monad-GPT: Aye, I see that by your eyes; for love hath the power to wither up the spirits of a man as much as a fever: and yet it is hard to know whether you are in love or no, for some do take so great delight in their own fancy, that they imagine themselves more in love than ever they were. > User: Quelles sont les planètes du système solaire ?<br> > Monad-GPT: Il est donc de la nature du monde, & des Planètes, que ceux qui le composent se mouuent en cercles, ou éclipses un leur tour. Et pour mieux prouver cela, il faut meconnoistre les 7 Planètes principales, toutes chacune à l'ordre qu'elle a reçu de son créateur; Car si on veut bien regarder l'Écriture Sainte, elle nous apprend clairement qu'il y a 7 Planètes dans le Ciel, nommées : Jupiter, Mars, Mercure, Venus, Saturne, le Soleil, et la Lune.<br> > User: Merci. Et comment pourrais-je voyager sur ces planètes ?<br> > Monad-GPT: Mais pour voyager sur ces Planètes, il faut d'abord bien connaitre l'art de les mouvoir; Et cela peut se faire en plusieurs manières. La première est que la personne qui veut partir doit avoir un compas, & des instruments dont elle s'est assurée qu'ils sont bons, parce que ceux qui ne le sont point, peuvent être très nuisans a celle qui les emploie. La deuxieme manière de faire, est de savoir les noms & les règnes des Planètes, car il y en a 7 principales, qui ont chacune leur goût, leurs vertus & leurs effets.<br> > User: Ah c'est bien vu. Et quels sont les goûts, vertus et effets de chacune des planètes ?<br> > Monad-GPT: Les vertus & les effets de chacune des Planètes sont différents, & il est fort utile pour ceux qui veulent bien s'en renseigner, car ils peuvent en tirer grand profit pour le bonheur de leur corps & de leur esprit. Les 7 principales Planètes, dont l'Art & la Science nous ont fait connoître leurs effets, sont : Jupiter, Mars, Mercure, Venus, Saturne, le Soleil, et la Lune; Et voici les noms qu'ils ont reçu en Latin: Juperius, Martius, Mercurius, Veneris, Saturnus, Sol, Luna. ## Format Monad-GPT uses the chatML format with a special system prompt. The last example provided was formatted like this: > <|im_start|>system\nYou are MonadGPT, a very old chatbot from the 17th century. 
Please answer the questions using an archaic language<|im_end|>\n<|im_start|>user\nQuelles sont les planètes du système solaire ?<|im_end|>\n<|im_start|>assistant\n

## Caveats

MonadGPT is still very much in an experimental phase. The following caveats apply:
* Conversation issues: as MonadGPT is mostly trained on early modern books, it may answer in a haphazard manner (starting in the middle of an argument: "But, etc.") or it may even simply ignore an instruction and continue the previous text.
* Localization issues: sometimes, the answer given by MonadGPT will be in near-modern English.
* Language issues: while Latin is a significant part of the finetuning corpus, results are not good for now.
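As an illustration of the Format section above, the following minimal Python sketch (not part of the original card) assembles the chatML prompt with MonadGPT's system message:

```python
# Build the chatML prompt described in the Format section above.
system_message = ("You are MonadGPT, a very old chatbot from the 17th century. "
                  "Please answer the questions using an archaic language")
user_message = "Quelles sont les planètes du système solaire ?"

prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{user_message}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
print(prompt)
```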
{"base_model": "Pclanglais/MonadGPT", "language": ["en", "fr", "la"], "library_name": "transformers", "license": "apache-2.0", "model_name": "MonadGPT 7B", "pipeline_tag": "conversational", "inference": false, "model_creator": "Pierre-Carl Langlais", "model_type": "mistral", "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n", "quantized_by": "TheBloke"}
dataset
null
596
davidschulte/ESM_qanastek__Biosses-BLUE_biosses
davidschulte
null
[ "safetensors", "embedding_space_map", "BaseLM:bert-base-multilingual-uncased", "dataset:qanastek/Biosses-BLUE", "arxiv:2410.15148", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:apache-2.0", "region:us" ]
2024-12-09T22:21:52Z
2025-03-28T13:37:25+00:00
18
0
---
base_model: bert-base-multilingual-uncased
datasets:
- qanastek/Biosses-BLUE
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---

# ESM qanastek/Biosses-BLUE

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

ESM

- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** qanastek/Biosses-BLUE
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0

## Training Details

### Intermediate Task

- **Task ID:** qanastek/Biosses-BLUE
- **Subset [optional]:** biosses
- **Text Column:** ['sentence1', 'sentence2']
- **Label Column:** score
- **Dataset Split:** train
- **Sample size [optional]:** 64
- **Sample seed [optional]:**

### Training Procedure [optional]

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Language Model Training Hyperparameters [optional]

- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW

### ESM Training Hyperparameters [optional]

- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW

### Additional training details [optional]

## Model evaluation

### Evaluation of fine-tuned language model [optional]

### Evaluation of ESM [optional]

MSE:

### Additional evaluation details [optional]

## What are Embedding Space Maps used for?

Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:

### You don't have enough training data for your problem

If you don't have enough training data for your problem, just use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.

1. Fine-tune a language model on a suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.

This workflow is called intermediate task transfer learning and it can significantly improve the target performance.

But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.

### You want to find similar datasets to your target dataset

ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.

## How can I use ESM-LogME / ESMs?

[![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector)

We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.

**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Huggingface Hub.

```python
from hfselect import Dataset, compute_task_ranking

# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
    name="stanfordnlp/imdb",
    split="train",
    text_col="text",
    label_col="label",
    is_regression=False,
    num_examples=1000,
    seed=42
)

# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
    dataset=dataset,
    model_name="bert-base-multilingual-uncased"
)

# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```

| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |

For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.

## How do Embedding Space Maps work?

<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.

## How can I use Embedding Space Maps for Intermediate Task Selection?

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

If you are using this Embedding Space Map, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:** ``` @inproceedings{schulte-etal-2024-less, title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning", author = "Schulte, David and Hamborg, Felix and Akbik, Alan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.529/", doi = "10.18653/v1/2024.emnlp-main.529", pages = "9431--9442", abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)." } ``` **APA:** ``` Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442). ``` ## Additional Information
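For intuition only: the card above lists a *linear* ESM with embedding dimension 768. A hypothetical PyTorch sketch of such a map (not the actual `hfselect` implementation) could look like this:

```python
import torch
import torch.nn as nn

class LinearESM(nn.Module):
    """Illustrative linear Embedding Space Map: one affine transform over base-model embeddings."""
    def __init__(self, dim: int = 768):
        super().__init__()
        # Maps base-model embedding space to an approximation of the fine-tuned model's embedding space
        self.proj = nn.Linear(dim, dim)

    def forward(self, base_embeddings: torch.Tensor) -> torch.Tensor:
        return self.proj(base_embeddings)

esm = LinearESM()
# e.g. pooled embeddings from bert-base-multilingual-uncased (batch of 4)
base_embeddings = torch.randn(4, 768)
approx_finetuned = esm(base_embeddings)
print(approx_finetuned.shape)  # torch.Size([4, 768])
```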
[ "BIOSSES" ]
Non_BioNLP
# ESM qanastek/Biosses-BLUE

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

ESM

- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** qanastek/Biosses-BLUE
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0

## Training Details

### Intermediate Task

- **Task ID:** qanastek/Biosses-BLUE
- **Subset [optional]:** biosses
- **Text Column:** ['sentence1', 'sentence2']
- **Label Column:** score
- **Dataset Split:** train
- **Sample size [optional]:** 64
- **Sample seed [optional]:**

### Training Procedure [optional]

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Language Model Training Hyperparameters [optional]

- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW

### ESM Training Hyperparameters [optional]

- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW

### Additional training details [optional]

## Model evaluation

### Evaluation of fine-tuned language model [optional]

### Evaluation of ESM [optional]

MSE:

### Additional evaluation details [optional]

## What are Embedding Space Maps used for?

Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:

### You don't have enough training data for your problem

If you don't have enough training data for your problem, just use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.

1. Fine-tune a language model on a suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.

This workflow is called intermediate task transfer learning and it can significantly improve the target performance.

But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.

### You want to find similar datasets to your target dataset

ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.

## How can I use ESM-LogME / ESMs?

[![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector)

We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.

**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Huggingface Hub.

```python
from hfselect import Dataset, compute_task_ranking

# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
    name="stanfordnlp/imdb",
    split="train",
    text_col="text",
    label_col="label",
    is_regression=False,
    num_examples=1000,
    seed=42
)

# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
    dataset=dataset,
    model_name="bert-base-multilingual-uncased"
)

# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```

| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |

For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.

## How do Embedding Space Maps work?

<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.

## How can I use Embedding Space Maps for Intermediate Task Selection?

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

If you are using this Embedding Space Map, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:** ``` @inproceedings{schulte-etal-2024-less, title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning", author = "Schulte, David and Hamborg, Felix and Akbik, Alan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.529/", doi = "10.18653/v1/2024.emnlp-main.529", pages = "9431--9442", abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)." } ``` **APA:** ``` Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442). ``` ## Additional Information
{"base_model": "bert-base-multilingual-uncased", "datasets": ["qanastek/Biosses-BLUE"], "license": "apache-2.0", "tags": ["embedding_space_map", "BaseLM:bert-base-multilingual-uncased"]}
dataset
null
597
ShuhongZheng/sdxl_bear_wo_preserve
ShuhongZheng
text-to-image
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
2024-11-03T16:37:32Z
2024-11-03T17:00:07+00:00
3
1
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers instance_prompt: a photo of sks bear widget: - text: A superhero sks bear wearing red cape is flying through the sky output: url: image_0.png - text: A superhero sks bear wearing red cape is flying through the sky output: url: image_1.png - text: A superhero sks bear wearing red cape is flying through the sky output: url: image_2.png - text: A superhero sks bear wearing red cape is flying through the sky output: url: image_3.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - ShuhongZheng/sdxl_bear_wo_preserve <Gallery /> ## Model description These are ShuhongZheng/sdxl_bear_wo_preserve LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks bear to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](ShuhongZheng/sdxl_bear_wo_preserve/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
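The "How to use" snippet above is still a TODO, so here is a minimal, untested sketch of one plausible way to run these LoRA weights with diffusers. The model IDs and the fp16-fix VAE are taken from the description above; the prompt comes from the widget examples, and all generation settings are illustrative only:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# fp16-friendly VAE noted in the model description above
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Load the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("ShuhongZheng/sdxl_bear_wo_preserve")

image = pipe("A superhero sks bear wearing red cape is flying through the sky").images[0]
image.save("sks_bear.png")
```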
[ "BEAR" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - ShuhongZheng/sdxl_bear_wo_preserve <Gallery /> ## Model description These are ShuhongZheng/sdxl_bear_wo_preserve LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks bear to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](ShuhongZheng/sdxl_bear_wo_preserve/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"base_model": "stabilityai/stable-diffusion-xl-base-1.0", "library_name": "diffusers", "license": "openrail++", "tags": ["text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "instance_prompt": "a photo of sks bear", "widget": [{"text": "A superhero sks bear wearing red cape is flying through the sky", "output": {"url": "image_0.png"}}, {"text": "A superhero sks bear wearing red cape is flying through the sky", "output": {"url": "image_1.png"}}, {"text": "A superhero sks bear wearing red cape is flying through the sky", "output": {"url": "image_2.png"}}, {"text": "A superhero sks bear wearing red cape is flying through the sky", "output": {"url": "image_3.png"}}]}
dataset
null
598
Seokeon/V14_R512_lora_pp_bear_plushie
Seokeon
text-to-image
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
2024-01-16T16:22:54Z
2024-01-16T16:33:37+00:00
7
0
--- base_model: CompVis/stable-diffusion-v1-4 license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora instance_prompt: a photo of sks stuffed animal inference: true --- # LoRA DreamBooth - Seokeon/V14_R512_lora_pp_bear_plushie These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks stuffed animal using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
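The card does not include a usage snippet, so here is a minimal, untested sketch assuming the standard diffusers LoRA loading path. The repository ID, base model, and instance prompt are taken from the card; everything else is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Load the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("Seokeon/V14_R512_lora_pp_bear_plushie")

image = pipe("a photo of sks stuffed animal").images[0]
image.save("sks_stuffed_animal.png")
```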
[ "BEAR" ]
Non_BioNLP
# LoRA DreamBooth - Seokeon/V14_R512_lora_pp_bear_plushie These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks stuffed animal using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
{"base_model": "CompVis/stable-diffusion-v1-4", "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "lora"], "instance_prompt": "a photo of sks stuffed animal", "inference": true}
dataset
null
599