Datasets: mteb /
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Commit e164893 · verified · 1 parent: f018399
Committed by Muennighoff

Scheduled Commit

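The changed files are plain JSON Lines (one record per line), so they can be read directly with the Datasets library listed above. The sketch below is only illustrative and assumes the repository files are available locally; the path is the first file changed in this commit.

from datasets import load_dataset

# Illustrative only: load one of the appended JSONL files as a Dataset.
# The path matches the first file changed in this commit; adjust it to
# wherever the repository is checked out on disk.
battles = load_dataset(
    "json",
    data_files="data/retrieval_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl",
    split="train",
)
print(len(battles), battles[0]["task_type"], battles[0]["type"])
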
data/retrieval_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl CHANGED
@@ -24,3 +24,5 @@
  {"tstamp": 1722369409.8742, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "2fd990202605484187eda3a456338321", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "what is inputs?", "0_output": [["what is inputs?", "Input/output\n\nIn computing, input/output (I/O, i/o, or informally io or IO) is the communication between an information processing system, such as a computer, and the outside world, such as another computer system, peripherals, or a human operator. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to \"perform I/O\" is to perform an input or output operation.\nare the pieces of hardware used by a human (or other system) to communicate with a computer. For instance, a keyboard or computer mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations. Any interaction with the system by an interactor is an input and the reaction the system responds is called the output."]], "0_corpus": "wikipedia", "1_conv_id": "65e3bf41b47045d3b59a334b4eed1be8", "1_model_name": "embed-english-v3.0", "1_prompt": "what is inputs?", "1_output": [["what is inputs?", "Input/output\n\nIn computing, input/output (I/O, i/o, or informally io or IO) is the communication between an information processing system, such as a computer, and the outside world, such as another computer system, peripherals, or a human operator. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to \"perform I/O\" is to perform an input or output operation.\nare the pieces of hardware used by a human (or other system) to communicate with a computer. For instance, a keyboard or computer mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations. Any interaction with the system by an interactor is an input and the reaction the system responds is called the output."]], "1_corpus": "wikipedia"}
  {"tstamp": 1722369669.9495, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "8e512b9a9fd44aa18dd3e5eb0ee834f7", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "I am looking for recent work on generating text with personalities and style", "0_output": [["I am looking for recent work on generating text with personalities and style", "Title: Cloning Ideology and Style using Deep Learning\n\nAbstract: Text generation tasks have gotten the attention of researchers in the last few years because of their applications on a large scale.In the past, many researchers focused on task-based text generations.Our research focuses on text generation based on the ideology and style of a specific author, and text generation on a topic that was not written by the same author in the past.Our trained model requires an input prompt containing initial few words of text to produce a few paragraphs of text based on the ideology and style of the author on which the model is trained.Our methodology to accomplish this task is based on Bi-LSTM.The Bi-LSTM model is used to make predictions at the character level, during the training corpus of a specific author is used along with the ground truth corpus.A pre-trained model is used to identify the sentences of ground truth having contradiction with the author's corpus to make our language model inclined.During training, we have achieved a perplexity score of 2.23 at the character level. The experiments show a perplexity score of around 3 over the test dataset."]], "0_corpus": "arxiv", "1_conv_id": "c97244ec7bbd46318ccc6f76171f91fd", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "I am looking for recent work on generating text with personalities and style", "1_output": [["I am looking for recent work on generating text with personalities and style", "Title: Text-to-Image Synthesis for Any Artistic Styles: Advancements in Personalized Artistic Image Generation via Subdivision and Dual Binding\n\nAbstract: Recent advancements in text-to-image models, such as Stable Diffusion, have demonstrated their ability to synthesize visual images through natural language prompts. One approach of personalizing text-to-image models, exemplified by DreamBooth, fine-tunes the pre-trained model by binding unique text identifiers with a few images of a specific subject. Although existing fine-tuning methods have demonstrated competence in rendering images according to the styles of famous painters, it is still challenging to learn to produce images encapsulating distinct art styles due to abstract and broad visual perceptions of stylistic attributes such as lines, shapes, textures, and colors. In this paper, we introduce a new method, Single-StyleForge, for personalization. It fine-tunes pre-trained text-to-image diffusion models to generate diverse images in specified styles from text prompts. By using around 15-20 images of the target style, the approach establishes a foundational binding of a unique token identifier with a broad range of the target style. It also utilizes auxiliary images to strengthen this binding, resulting in offering specific guidance on representing elements such as persons in a target style-consistent manner. In addition, we present ways to improve the quality of style and text-image alignment through a method called Multi-StyleForge, which inherits the strategy used in StyleForge and learns tokens in multiple. Experimental evaluation conducted on six distinct artistic styles demonstrates substantial improvements in both the quality of generated images and the perceptual fidelity metrics, such as FID, KID, and CLIP scores."]], "1_corpus": "arxiv"}
  {"tstamp": 1722370295.4157, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "cd70dab8da344dee861341fe526fc33f", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "0_output": [["Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "Title: ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models\n\nAbstract: The integration of Multimodal Large Language Models (MLLMs) with robotic systems has significantly enhanced the ability of robots to interpret and act upon natural language instructions. Despite these advancements, conventional MLLMs are typically trained on generic image-text pairs, lacking essential robotics knowledge such as affordances and physical knowledge, which hampers their efficacy in manipulation tasks. To bridge this gap, we introduce ManipVQA, a novel framework designed to endow MLLMs with Manipulation-centric knowledge through a Visual Question-Answering format. This approach not only encompasses tool detection and affordance recognition but also extends to a comprehensive understanding of physical concepts. Our approach starts with collecting a varied set of images displaying interactive objects, which presents a broad range of challenges in tool object detection, affordance, and physical concept predictions. To seamlessly integrate this robotic-specific knowledge with the inherent vision-reasoning capabilities of MLLMs, we adopt a unified VQA format and devise a fine-tuning strategy that preserves the original vision-reasoning abilities while incorporating the new robotic insights. Empirical evaluations conducted in robotic simulators and across various vision task benchmarks demonstrate the robust performance of ManipVQA. Code and dataset will be made publicly available at https://github.com/SiyuanHuang95/ManipVQA."]], "0_corpus": "arxiv", "1_conv_id": "44c3418ddda44ffcbe5214c14c1a99f6", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "1_output": [["Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "Title: ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models\n\nAbstract: The integration of Multimodal Large Language Models (MLLMs) with robotic systems has significantly enhanced the ability of robots to interpret and act upon natural language instructions. Despite these advancements, conventional MLLMs are typically trained on generic image-text pairs, lacking essential robotics knowledge such as affordances and physical knowledge, which hampers their efficacy in manipulation tasks. To bridge this gap, we introduce ManipVQA, a novel framework designed to endow MLLMs with Manipulation-centric knowledge through a Visual Question-Answering format. This approach not only encompasses tool detection and affordance recognition but also extends to a comprehensive understanding of physical concepts. Our approach starts with collecting a varied set of images displaying interactive objects, which presents a broad range of challenges in tool object detection, affordance, and physical concept predictions. To seamlessly integrate this robotic-specific knowledge with the inherent vision-reasoning capabilities of MLLMs, we adopt a unified VQA format and devise a fine-tuning strategy that preserves the original vision-reasoning abilities while incorporating the new robotic insights. Empirical evaluations conducted in robotic simulators and across various vision task benchmarks demonstrate the robust performance of ManipVQA. Code and dataset will be made publicly available at https://github.com/SiyuanHuang95/ManipVQA."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722370780.9276, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "bd4285db88334dfe9e0184d7e0221d72", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Data Mixing Made Efficient: A Bivariate Scaling Law for Language Model Pretraining\n\nAbstract: Large language models exhibit exceptional generalization capabilities, primarily attributed to the utilization of diversely sourced data. However, conventional practices in integrating this diverse data heavily rely on heuristic schemes, lacking theoretical guidance. This research tackles these limitations by investigating strategies based on low-cost proxies for data mixtures, with the aim of streamlining data curation to enhance training efficiency. Specifically, we propose a unified scaling law, termed BiMix, which accurately models the bivariate scaling behaviors of both data quantity and mixing proportions. We conduct systematic experiments and provide empirical evidence for the predictive power and fundamental principles of BiMix. Notably, our findings reveal that entropy-driven training-free data mixtures can achieve comparable or even better performance than more resource-intensive methods. We hope that our quantitative insights can shed light on further judicious research and development in cost-effective language modeling."]], "0_corpus": "arxiv", "1_conv_id": "83c8271e51c54db3a59a8bde76f8eefd", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Like Sparrows on a Clothes Line: The Self-Organization of Random Number Sequences\n\nAbstract: We study sequences of random numbers {Z[1],Z[2],Z[3],...,Z[n]} -- which can be considered random walks with reflecting barriers -- and define their \"types\" according to whether Z[i] > Z[i+1], (a down-movement), or Z[i] < Z[i+1] (up-movement). This paper examines the means, xi, to which the Zi converge, when a large number of sequences of the same type is considered. It is shown that these means organize themselves in such a way that, between two turning points of the sequence, they are equidistant from one another. We also show that m steps in one direction tend to offset one step in the other direction, as m -> infinity. Key words:random number sequence, self-organization, random walk, reflecting barriers."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722371024.6258, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "c52fa11dbc924d5b9006f7ce087ccd38", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "Capital of france", "0_output": [["Capital of france", "List of capitals of France\n\nThis is a chronological list of capitals of France. The capital of France has been Paris since its liberation in 1944."]], "0_corpus": "wikipedia", "1_conv_id": "c6d07e14b3cb44b7bb1da9f498d186d9", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Capital of france", "1_output": [["Capital of france", "List of capitals of France\n\nThis is a chronological list of capitals of France. The capital of France has been Paris since its liberation in 1944."]], "1_corpus": "wikipedia"}
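Each retrieval_battle record above captures one side-by-side comparison: a shared prompt, the two competing models under 0_model_name and 1_model_name with their retrieved passages in 0_output and 1_output, and the verdict in type (leftvote, tievote and bothbadvote appear in this hunk). As a rough illustration only (not part of the commit), verdicts per model pairing could be tallied like this, using the field names visible above:

import json
from collections import Counter

verdicts = Counter()
with open("data/retrieval_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        # Field names as they appear in the records above.
        verdicts[(rec["0_model_name"], rec["1_model_name"], rec["type"])] += 1

for (model_a, model_b, vote), count in verdicts.most_common():
    print(f"{count:3d}  {model_a} vs {model_b}: {vote}")
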
data/retrieval_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl CHANGED
@@ -104,3 +104,8 @@
  {"tstamp": 1722369712.2347, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722369711.5356, "finish": 1722369712.2347, "ip": "", "conv_id": "057320225ef24795adb273a944b2cc91", "model_name": "text-embedding-004", "prompt": "What are the landmark work on generating image captions with sentiments?", "output": [["What are the landmark work on generating image captions with sentiments?", "Title: Image Captioning at Will: A Versatile Scheme for Effectively Injecting Sentiments into Image Descriptions\n\nAbstract: Automatic image captioning has recently approached human-level performance due to the latest advances in computer vision and natural language understanding. However, most of the current models can only generate plain factual descriptions about the content of a given image. However, for human beings, image caption writing is quite flexible and diverse, where additional language dimensions, such as emotion, humor and language styles, are often incorporated to produce diverse, emotional, or appealing captions. In particular, we are interested in generating sentiment-conveying image descriptions, which has received little attention. The main challenge is how to effectively inject sentiments into the generated captions without altering the semantic matching between the visual content and the generated descriptions. In this work, we propose two different models, which employ different schemes for injecting sentiments into image captions. Compared with the few existing approaches, the proposed models are much simpler and yet more effective. The experimental results show that our model outperform the state-of-the-art models in generating sentimental (i.e., sentiment-bearing) image captions. In addition, we can also easily manipulate the model by assigning different sentiments to the testing image to generate captions with the corresponding sentiments."]], "corpus": "arxiv"}
  {"tstamp": 1722370282.9393, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722370282.5878, "finish": 1722370282.9393, "ip": "", "conv_id": "cd70dab8da344dee861341fe526fc33f", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "output": [["Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "Title: ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models\n\nAbstract: The integration of Multimodal Large Language Models (MLLMs) with robotic systems has significantly enhanced the ability of robots to interpret and act upon natural language instructions. Despite these advancements, conventional MLLMs are typically trained on generic image-text pairs, lacking essential robotics knowledge such as affordances and physical knowledge, which hampers their efficacy in manipulation tasks. To bridge this gap, we introduce ManipVQA, a novel framework designed to endow MLLMs with Manipulation-centric knowledge through a Visual Question-Answering format. This approach not only encompasses tool detection and affordance recognition but also extends to a comprehensive understanding of physical concepts. Our approach starts with collecting a varied set of images displaying interactive objects, which presents a broad range of challenges in tool object detection, affordance, and physical concept predictions. To seamlessly integrate this robotic-specific knowledge with the inherent vision-reasoning capabilities of MLLMs, we adopt a unified VQA format and devise a fine-tuning strategy that preserves the original vision-reasoning abilities while incorporating the new robotic insights. Empirical evaluations conducted in robotic simulators and across various vision task benchmarks demonstrate the robust performance of ManipVQA. Code and dataset will be made publicly available at https://github.com/SiyuanHuang95/ManipVQA."]], "corpus": "arxiv"}
  {"tstamp": 1722370282.9393, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722370282.5878, "finish": 1722370282.9393, "ip": "", "conv_id": "44c3418ddda44ffcbe5214c14c1a99f6", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "output": [["Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "Title: ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models\n\nAbstract: The integration of Multimodal Large Language Models (MLLMs) with robotic systems has significantly enhanced the ability of robots to interpret and act upon natural language instructions. Despite these advancements, conventional MLLMs are typically trained on generic image-text pairs, lacking essential robotics knowledge such as affordances and physical knowledge, which hampers their efficacy in manipulation tasks. To bridge this gap, we introduce ManipVQA, a novel framework designed to endow MLLMs with Manipulation-centric knowledge through a Visual Question-Answering format. This approach not only encompasses tool detection and affordance recognition but also extends to a comprehensive understanding of physical concepts. Our approach starts with collecting a varied set of images displaying interactive objects, which presents a broad range of challenges in tool object detection, affordance, and physical concept predictions. To seamlessly integrate this robotic-specific knowledge with the inherent vision-reasoning capabilities of MLLMs, we adopt a unified VQA format and devise a fine-tuning strategy that preserves the original vision-reasoning abilities while incorporating the new robotic insights. Empirical evaluations conducted in robotic simulators and across various vision task benchmarks demonstrate the robust performance of ManipVQA. Code and dataset will be made publicly available at https://github.com/SiyuanHuang95/ManipVQA."]], "corpus": "arxiv"}
+ {"tstamp": 1722370776.4848, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722370776.2701, "finish": 1722370776.4848, "ip": "", "conv_id": "bd4285db88334dfe9e0184d7e0221d72", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Data Mixing Made Efficient: A Bivariate Scaling Law for Language Model Pretraining\n\nAbstract: Large language models exhibit exceptional generalization capabilities, primarily attributed to the utilization of diversely sourced data. However, conventional practices in integrating this diverse data heavily rely on heuristic schemes, lacking theoretical guidance. This research tackles these limitations by investigating strategies based on low-cost proxies for data mixtures, with the aim of streamlining data curation to enhance training efficiency. Specifically, we propose a unified scaling law, termed BiMix, which accurately models the bivariate scaling behaviors of both data quantity and mixing proportions. We conduct systematic experiments and provide empirical evidence for the predictive power and fundamental principles of BiMix. Notably, our findings reveal that entropy-driven training-free data mixtures can achieve comparable or even better performance than more resource-intensive methods. We hope that our quantitative insights can shed light on further judicious research and development in cost-effective language modeling."]], "corpus": "arxiv"}
+ {"tstamp": 1722370776.4848, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722370776.2701, "finish": 1722370776.4848, "ip": "", "conv_id": "83c8271e51c54db3a59a8bde76f8eefd", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Like Sparrows on a Clothes Line: The Self-Organization of Random Number Sequences\n\nAbstract: We study sequences of random numbers {Z[1],Z[2],Z[3],...,Z[n]} -- which can be considered random walks with reflecting barriers -- and define their \"types\" according to whether Z[i] > Z[i+1], (a down-movement), or Z[i] < Z[i+1] (up-movement). This paper examines the means, xi, to which the Zi converge, when a large number of sequences of the same type is considered. It is shown that these means organize themselves in such a way that, between two turning points of the sequence, they are equidistant from one another. We also show that m steps in one direction tend to offset one step in the other direction, as m -> infinity. Key words:random number sequence, self-organization, random walk, reflecting barriers."]], "corpus": "arxiv"}
+ {"tstamp": 1722370841.8196, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722370839.6311, "finish": 1722370841.8196, "ip": "", "conv_id": "ae59ce754bc84ed3a34adc531b058d3c", "model_name": "GritLM/GritLM-7B", "prompt": "Explain GraphRAG and it's benefits and drawbacks vs standard RAG", "output": [["Explain GraphRAG and it's benefits and drawbacks vs standard RAG", "Title: Don't Forget to Connect! Improving RAG with Graph-based Reranking\n\nAbstract: Retrieval Augmented Generation (RAG) has greatly improved the performance of Large Language Model (LLM) responses by grounding generation with context from existing documents. These systems work well when documents are clearly relevant to a question context. But what about when a document has partial information, or less obvious connections to the context? And how should we reason about connections between documents? In this work, we seek to answer these two core questions about RAG generation. We introduce G-RAG, a reranker based on graph neural networks (GNNs) between the retriever and reader in RAG. Our method combines both connections between documents and semantic information (via Abstract Meaning Representation graphs) to provide a context-informed ranker for RAG. G-RAG outperforms state-of-the-art approaches while having smaller computational footprint. Additionally, we assess the performance of PaLM 2 as a reranker and find it to significantly underperform G-RAG. This result emphasizes the importance of reranking for RAG even when using Large Language Models."]], "corpus": "arxiv"}
+ {"tstamp": 1722371016.092, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722371013.8544, "finish": 1722371016.092, "ip": "", "conv_id": "c52fa11dbc924d5b9006f7ce087ccd38", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Capital of france", "output": [["Capital of france", "List of capitals of France\n\nThis is a chronological list of capitals of France. The capital of France has been Paris since its liberation in 1944."]], "corpus": "wikipedia"}
+ {"tstamp": 1722371016.092, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722371013.8544, "finish": 1722371016.092, "ip": "", "conv_id": "c6d07e14b3cb44b7bb1da9f498d186d9", "model_name": "GritLM/GritLM-7B", "prompt": "Capital of france", "output": [["Capital of france", "List of capitals of France\n\nThis is a chronological list of capitals of France. The capital of France has been Paris since its liberation in 1944."]], "corpus": "wikipedia"}
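The retrieval_individual records hold single-model runs: model_name, prompt, and output, where output is a list of [query, passage] pairs and each passage begins with its article or paper title before a blank line. A small sketch (illustrative only, assuming the file is available locally) that prints the title of the top hit for each query:

import json

with open("data/retrieval_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        query, passage = rec["output"][0]   # first retrieved hit
        title = passage.split("\n\n")[0]    # text before the first blank line
        print(f'{rec["model_name"]}: "{query}" -> {title}')
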
data/sts_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl CHANGED
@@ -6,3 +6,4 @@
  {"tstamp": 1722367117.2478, "task_type": "sts", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722367116.9614, "finish": 1722367117.2478, "ip": "", "conv_id": "c9c382e420cf4cf0b6d571b54b35ee29", "model_name": "intfloat/multilingual-e5-large-instruct", "txt0": "Five women wearing red formal ball gowns are standing together.", "txt1": "Five women with red and black halter tops and red and black miniskirts wearing red and white shoes.", "txt2": "A group of women are dressed alike.", "output": ""}
  {"tstamp": 1722367999.3196, "task_type": "sts", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722367999.283, "finish": 1722367999.3196, "ip": "", "conv_id": "2100918edf5b4578a5c41bb5464a31be", "model_name": "nomic-ai/nomic-embed-text-v1.5", "txt0": "They conduct personalized business or individual assessments to identify insurance needs.", "txt1": "Loweinsure.com conducts assessments for businesses and individuals to determine insurance needs.", "txt2": "The agency offers quotes and consultations to help clients find appropriate insurance.", "output": ""}
  {"tstamp": 1722367999.3196, "task_type": "sts", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722367999.283, "finish": 1722367999.3196, "ip": "", "conv_id": "75b4e5f48a7d41fe9ff27004b532c355", "model_name": "BAAI/bge-large-en-v1.5", "txt0": "They conduct personalized business or individual assessments to identify insurance needs.", "txt1": "Loweinsure.com conducts assessments for businesses and individuals to determine insurance needs.", "txt2": "The agency offers quotes and consultations to help clients find appropriate insurance.", "output": ""}
+ {"tstamp": 1722370884.5626, "task_type": "sts", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722370884.5303, "finish": 1722370884.5626, "ip": "", "conv_id": "9abda6005ad84f4a822bfa4f413a39a2", "model_name": "GritLM/GritLM-7B", "txt0": "It is useful to outline the ordering and manufacturing processes on a weekly basis to see how this manufacturer's inventory policies might differ for the two different sizes.", "txt1": "The manufacturer always applies the same inventory policies for different sizes.", "txt2": "The manufacturer may sometimes have differing inventory policies for different sizes.", "output": ""}