Scheduled Commit
data/retrieval_individual-c4bdd700-2c9c-4d9b-92d7-63fecd0c1575.jsonl
CHANGED
@@ -18,3 +18,5 @@
{"tstamp": 1722436817.7477, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722436816.0409, "finish": 1722436817.7477, "ip": "", "conv_id": "0452927ceec84ae3bff8ca5d80814734", "model_name": "embed-english-v3.0", "prompt": "What is a landmark paper that writes image captions with sentiments?", "output": [["What is a landmark paper that writes image captions with sentiments?", "Title: Image Captioning at Will: A Versatile Scheme for Effectively Injecting Sentiments into Image Descriptions\n\nAbstract: Automatic image captioning has recently approached human-level performance due to the latest advances in computer vision and natural language understanding. However, most of the current models can only generate plain factual descriptions about the content of a given image. However, for human beings, image caption writing is quite flexible and diverse, where additional language dimensions, such as emotion, humor and language styles, are often incorporated to produce diverse, emotional, or appealing captions. In particular, we are interested in generating sentiment-conveying image descriptions, which has received little attention. The main challenge is how to effectively inject sentiments into the generated captions without altering the semantic matching between the visual content and the generated descriptions. In this work, we propose two different models, which employ different schemes for injecting sentiments into image captions. Compared with the few existing approaches, the proposed models are much simpler and yet more effective. The experimental results show that our model outperform the state-of-the-art models in generating sentimental (i.e., sentiment-bearing) image captions. In addition, we can also easily manipulate the model by assigning different sentiments to the testing image to generate captions with the corresponding sentiments."]], "corpus": "arxiv"}
{"tstamp": 1722436845.5015, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722436845.0959, "finish": 1722436845.5015, "ip": "", "conv_id": "c3e05bfd38a54334966b3fee56c50372", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "What is a landmark paper that captions images with sentiments?", "output": [["What is a landmark paper that captions images with sentiments?", "Title: SentiCap: Generating Image Descriptions with Sentiments\n\nAbstract: The recent progress on image recognition and language modeling is making automatic description of image content a reality. However, stylized, non-factual aspects of the written description are missing from the current systems. One such style is descriptions with emotions, which is commonplace in everyday communication, and influences decision-making and interpersonal relationships. We design a system to describe an image with emotions, and present a model that automatically generates captions with positive or negative sentiments. We propose a novel switching recurrent neural network with word-level regularization, which is able to produce emotional image captions using only 2000+ training sentences containing sentiments. We evaluate the captions with different automatic and crowd-sourcing metrics. Our model compares favourably in common quality metrics for image captioning. In 84.6% of cases the generated positive captions were judged as being at least as descriptive as the factual captions. Of these positive captions 88% were confirmed by the crowd-sourced workers as having the appropriate sentiment."]], "corpus": "arxiv"}
{"tstamp": 1722436845.5015, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722436845.0959, "finish": 1722436845.5015, "ip": "", "conv_id": "2c277a48c058469a99f7d8e871e56114", "model_name": "text-embedding-004", "prompt": "What is a landmark paper that captions images with sentiments?", "output": [["What is a landmark paper that captions images with sentiments?", "Title: SentiCap: Generating Image Descriptions with Sentiments\n\nAbstract: The recent progress on image recognition and language modeling is making automatic description of image content a reality. However, stylized, non-factual aspects of the written description are missing from the current systems. One such style is descriptions with emotions, which is commonplace in everyday communication, and influences decision-making and interpersonal relationships. We design a system to describe an image with emotions, and present a model that automatically generates captions with positive or negative sentiments. We propose a novel switching recurrent neural network with word-level regularization, which is able to produce emotional image captions using only 2000+ training sentences containing sentiments. We evaluate the captions with different automatic and crowd-sourcing metrics. Our model compares favourably in common quality metrics for image captioning. In 84.6% of cases the generated positive captions were judged as being at least as descriptive as the factual captions. Of these positive captions 88% were confirmed by the crowd-sourced workers as having the appropriate sentiment."]], "corpus": "arxiv"}
+{"tstamp": 1722437856.3717, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722437855.6019, "finish": 1722437856.3717, "ip": "", "conv_id": "720939dccd164266b3656f02d8a865ea", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Looking for a comprehensive Chinese short-text conversation dataset for training neural dialogue models.", "output": [["Looking for a comprehensive Chinese short-text conversation dataset for training neural dialogue models.", "Title: A Large-Scale Chinese Short-Text Conversation Dataset\n\nAbstract: The advancements of neural dialogue generation models show promising results on modeling short-text conversations. However, training such models usually needs a large-scale high-quality dialogue corpus, which is hard to access. In this paper, we present a large-scale cleaned Chinese conversation dataset, LCCC, which contains a base version (6.8million dialogues) and a large version (12.0 million dialogues). The quality of our dataset is ensured by a rigorous data cleaning pipeline, which is built based on a set of rules and a classifier that is trained on manually annotated 110K dialogue pairs. We also release pre-training dialogue models which are trained on LCCC-base and LCCC-large respectively. The cleaned dataset and the pre-training models will facilitate the research of short-text conversation modeling. All the models and datasets are available at https://github.com/thu-coai/CDial-GPT."]], "corpus": "arxiv"}
+{"tstamp": 1722437856.3717, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722437855.6019, "finish": 1722437856.3717, "ip": "", "conv_id": "7fce68dfaa284fc8be2fb939013f9b3b", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Looking for a comprehensive Chinese short-text conversation dataset for training neural dialogue models.", "output": [["Looking for a comprehensive Chinese short-text conversation dataset for training neural dialogue models.", "Title: A Large-Scale Chinese Short-Text Conversation Dataset\n\nAbstract: The advancements of neural dialogue generation models show promising results on modeling short-text conversations. However, training such models usually needs a large-scale high-quality dialogue corpus, which is hard to access. In this paper, we present a large-scale cleaned Chinese conversation dataset, LCCC, which contains a base version (6.8million dialogues) and a large version (12.0 million dialogues). The quality of our dataset is ensured by a rigorous data cleaning pipeline, which is built based on a set of rules and a classifier that is trained on manually annotated 110K dialogue pairs. We also release pre-training dialogue models which are trained on LCCC-base and LCCC-large respectively. The cleaned dataset and the pre-training models will facilitate the research of short-text conversation modeling. All the models and datasets are available at https://github.com/thu-coai/CDial-GPT."]], "corpus": "arxiv"}
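Each line of this file is one self-contained JSON record of a retrieval call: the query ("prompt"), the embedding model used, start/finish timestamps, and the retrieved [query, passage] pairs under "output". Below is a minimal sketch of how such records could be consumed. The field names are taken from the entries in this diff, but the reading loop and the printed summary are illustrative assumptions, not part of the dataset's own tooling.

```python
# Minimal sketch, assuming each line of the .jsonl file is one JSON object
# with the fields shown in the diff above (prompt, model_name, start,
# finish, output). Not an official loader for this dataset.
import json

path = "data/retrieval_individual-c4bdd700-2c9c-4d9b-92d7-63fecd0c1575.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        # "output" is a list of [query, passage] pairs; each passage here
        # is a "Title: ...\n\nAbstract: ..." string from the arXiv corpus.
        prompt = record["prompt"]
        model = record["model_name"]
        latency = record["finish"] - record["start"]
        for _query, passage in record["output"]:
            title = passage.split("\n\n")[0].removeprefix("Title: ")
            print(f"{model} ({latency:.2f}s): {prompt!r} -> {title}")
```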