Scheduled Commit
data/retrieval_battle-81269da8-6dc7-417a-8e0a-245c4a753482.jsonl
CHANGED
@@ -31,3 +31,4 @@
31 | {"tstamp": 1722385287.1562, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "40cbc8e2f6bf4dc6ba1f9a2a9d21eefe", "0_model_name": "voyage-multilingual-2", "0_prompt": "What is the meaning of life? ", "0_output": [["What is the meaning of life? ", "Meaning of life\n\nTo be fruitful and multiply. ()\nTo obtain freedom. ()\nTo fill the Earth and subdue it. ()\nTo serve humankind, to prepare to meet and become more like God, to choose good over evil, and have joy.\n˹He is the One˺ Who created death and life in order to test which of you is best in deeds. And He is the Almighty, All-Forgiving. —\nTo worship God and enter heaven in afterlife.\nTo love, to feel, to enjoy the act of living\nTo love more.\nTo love those who mean the most. Every life you touch will touch you back.\nTo treasure every enjoyable sensation one has.\nTo seek beauty in all its forms.\nTo have fun or enjoy life.\nTo seek pleasure and avoid pain.\nTo be compassionate.\nTo be moved by the tears and pain of others, and try to help them out of love and compassion.\nTo love others as best we possibly can.\nTo eat, drink, and be merry.\nTo have power, to be better\nTo strive for power and superiority.\nTo rule the world.\nTo know and master the world.\nTo know and master nature.\nTo help life become as powerful as possible."]], "0_corpus": "wikipedia", "1_conv_id": "0c6543fff61e41939f5206b477ffe187", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "What is the meaning of life? ", "1_output": [["What is the meaning of life? ", "Meaning of life\n\nThe meaning of life pertains to the inherent significance or philosophical meaning of living (or existence in general). There is not a definitive answer, and thinking or discourse on the topic is sought in the English language through the question, \"What is the meaning of life?\" (or the related \"Why are we here?\" or \"What is the purpose of existence?\"). There have been many proposed answers to these questions from many different cultural and ideological backgrounds. The search for life's meaning has produced much philosophical, scientific, theological, and metaphysical speculation throughout history. Different people and cultures believe different things for the answer to this question. Opinions vary on the usefulness of using time and resources in the pursuit of an answer. Excessive pondering can be indicative of, or lead to, an existential crisis.\nThe meaning of life can be derived from philosophical and religious contemplation of, and scientific inquiries about, existence, social ties, consciousness, and happiness. Many other issues are also involved, such as symbolic meaning, ontology, value, purpose, ethics, good and evil, free will, the existence of one or multiple gods, conceptions of God, the soul, and the afterlife. Scientific contributions focus primarily on describing related empirical facts about the universe, exploring the context and parameters concerning the \"how\" of life. Science also studies and can provide recommendations for the pursuit of well-being and a related conception of morality. An alternative, humanistic approach poses the question, \"What is the meaning of my life?\""]], "1_corpus": "wikipedia"}
32 | {"tstamp": 1722385900.9648, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "62bf4d34067d48fab317ab0ef26dcc9c", "0_model_name": "BM25", "0_prompt": "how to write a good abstract", "0_output": [["how to write a good abstract", "Tex Stackexchange\n\nQ: Missing abstract with amsart class I'm not very good at LaTeX but I've been using it to write my dissertation. Have just come to the end and trying to format everything correctly. Whenever I try to put my Abstract paragraph into the form \\begin{abstract} (writing here) \\end{abstract} and click play the whole abstract section just disappears. What am I doing wrong?\n\\documentclass[12pt]{amsart}\n\n\\begin{document}\n\n\\begin{abstract}\n(my abstract here)\n\\end{abstract}\n\\end{document}\n\n\nA: The amsart class does some Mysterious Stuff with abstracts: it stores the abstract in a box to be used in the title page. To get it to appear, issue the magic command \\maketitle after the abstract.\n(It will then complain about a missing \\title ... )\n\\documentclass[12pt]{amsart}\n\n\n\\begin{document}\n\n\\begin{abstract}\n(my abstract here)\n\\end{abstract}\n\n\\maketitle\nSome text\n\\end{document}\n\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "81d3d31b53db4678a92dea8dedda3e3f", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "how to write a good abstract", "1_output": [["how to write a good abstract", "Mathoverflow Stackexchange\n\nQ: How to write an abstract for a math paper? How would you go about writing an abstract for a Math paper? I know that an abstract is supposed to \"advertise\" the paper. However, I do not really know how to get started. Could someone tell me how they go about writing an abstract?\n\nA: *\n\n*Avoid notation if possible. Notation makes it really hard to search electronically.\n\n*Put the subject in context, e.g., \"In a recent paper, T. Lehrer introduced the concept of left-bifurcled rectangles. He conjectured no such rectangles exist when the number of bifurcles $n$ is odd.\"\n\n*State your results, in non-technical language, if possible. \"In this paper we show the existence of left-bifurcled rectangles for all prime $n$.\"\n\n*Mention a technique, if there is a new one: \"Our methods involve analytic and algebraic topology of locally euclidean metrizations of infinitely differentiable Riemannian manifolds\".\n\n*Never, ever, ever, cite papers in the bibliography by giving citation numbers; the abstract is an independent entity that should stand on its own.\n"]], "1_corpus": "stackexchange"}
33 | {"tstamp": 1722389811.1554, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "d85582bd3b174bf2ad9cd3445a06139b", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "Who are the authors of the original vae paper ", "0_output": [["Who are the authors of the original vae paper ", "Title: GAN and VAE from an Optimal Transport Point of View\n\nAbstract: This short article revisits some of the ideas introduced in arXiv:1701.07875 and arXiv:1705.07642 in a simple setup. This sheds some lights on the connexions between Variational Autoencoders (VAE), Generative Adversarial Networks (GAN) and Minimum Kantorovitch Estimators (MKE)."]], "0_corpus": "arxiv", "1_conv_id": "c19cebcaeafc4bdc912783fa3ba27b24", "1_model_name": "text-embedding-3-large", "1_prompt": "Who are the authors of the original vae paper ", "1_output": [["Who are the authors of the original vae paper ", "Title: AVAE: Adversarial Variational Auto Encoder\n\nAbstract: Among the wide variety of image generative models, two models stand out: Variational Auto Encoders (VAE) and Generative Adversarial Networks (GAN). GANs can produce realistic images, but they suffer from mode collapse and do not provide simple ways to get the latent representation of an image. On the other hand, VAEs do not have these problems, but they often generate images less realistic than GANs. In this article, we explain that this lack of realism is partially due to a common underestimation of the natural image manifold dimensionality. To solve this issue we introduce a new framework that combines VAE and GAN in a novel and complementary way to produce an auto-encoding model that keeps VAEs properties while generating images of GAN-quality. We evaluate our approach both qualitatively and quantitatively on five image datasets."]], "1_corpus": "arxiv"}
34 | + {"tstamp": 1722389883.6233, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "16a2429e632846798e0182b7b5c57d67", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "What is xpool cross modal language video attention", "0_output": [["What is xpool cross modal language video attention", "Title: B\\\"acklund--Darboux transformations in Sato's Grassmannian\n\nAbstract: We define B\\\"acklund--Darboux transformations in Sato's Grassmannian. They can be regarded as Darboux transformations on maximal algebras of commuting ordinary differential operators. We describe the action of these transformations on related objects: wave functions, tau-functions and spectral algebras. This paper is the second of a series of papers (hep-th/9510211, q-alg/9602011, q-alg/9602012) on the bispectral problem."]], "0_corpus": "arxiv", "1_conv_id": "7569a1c8bfe4409ba6e210fa06fa25d2", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "What is xpool cross modal language video attention", "1_output": [["What is xpool cross modal language video attention", "Title: X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval\n\nAbstract: In text-video retrieval, the objective is to learn a cross-modal similarity function between a text and a video that ranks relevant text-video pairs higher than irrelevant pairs. However, videos inherently express a much wider gamut of information than texts. Instead, texts often capture sub-regions of entire videos and are most semantically similar to certain frames within videos. Therefore, for a given text, a retrieval model should focus on the text's most semantically similar video sub-regions to make a more relevant comparison. Yet, most existing works aggregate entire videos without directly considering text. Common text-agnostic aggregations schemes include mean-pooling or self-attention over the frames, but these are likely to encode misleading visual information not described in the given text. To address this, we propose a cross-modal attention model called X-Pool that reasons between a text and the frames of a video. Our core mechanism is a scaled dot product attention for a text to attend to its most semantically similar frames. We then generate an aggregated video representation conditioned on the text's attention weights over the frames. We evaluate our method on three benchmark datasets of MSR-VTT, MSVD and LSMDC, achieving new state-of-the-art results by up to 12% in relative improvement in Recall@1. Our findings thereby highlight the importance of joint text-video reasoning to extract important visual cues according to text. Full code and demo can be found at: https://layer6ai-labs.github.io/xpool/"]], "1_corpus": "arxiv"}
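Every record in this file follows the same battle schema: a Unix `tstamp`, a vote `type` (only `rightvote` and `bothbadvote` appear in this commit), per-side `*_conv_id` and `*_model_name` fields, the shared prompt, and each side's retrieved passage. A minimal sketch of how such a file can be consumed follows; the `leftvote` branch is an assumption by symmetry with `rightvote` and does not occur in the lines above:

```python
import json
from collections import Counter

def tally_votes(path):
    """Count battle outcomes per embedding model in a retrieval-battle JSONL file."""
    wins = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            if rec.get("task_type") != "retrieval":
                continue
            left, right = rec["0_model_name"], rec["1_model_name"]
            if rec["type"] == "rightvote":
                wins[right] += 1
            elif rec["type"] == "leftvote":  # assumed counterpart; not present above
                wins[left] += 1
            # "bothbadvote" credits neither side, so it is skipped.
    return wins

print(tally_votes("data/retrieval_battle-81269da8-6dc7-417a-8e0a-245c4a753482.jsonl"))
```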
data/retrieval_individual-81269da8-6dc7-417a-8e0a-245c4a753482.jsonl
CHANGED
@@ -74,3 +74,5 @@
74 | {"tstamp": 1722385952.2103, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722385949.9561, "finish": 1722385952.2103, "ip": "", "conv_id": "13aca6a147054459a7ad0f23455c0359", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "how to write a good abstract", "output": [["how to write a good abstract", "Abstract (summary)\n\nAn abstract is a brief summary of a research article, thesis, review, conference proceeding, or any in-depth analysis of a particular subject and is often used to help the reader quickly ascertain the paper's purpose. When used, an abstract always appears at the beginning of a manuscript or typescript, acting as the point-of-entry for any given academic paper or patent application. Abstracting and indexing services for various academic disciplines are aimed at compiling a body of literature for that particular subject."]], "corpus": "wikipedia"}
75 | {"tstamp": 1722389761.8838, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722389761.3151, "finish": 1722389761.8838, "ip": "", "conv_id": "d85582bd3b174bf2ad9cd3445a06139b", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Who are the authors of the original vae paper ", "output": [["Who are the authors of the original vae paper ", "Title: GAN and VAE from an Optimal Transport Point of View\n\nAbstract: This short article revisits some of the ideas introduced in arXiv:1701.07875 and arXiv:1705.07642 in a simple setup. This sheds some lights on the connexions between Variational Autoencoders (VAE), Generative Adversarial Networks (GAN) and Minimum Kantorovitch Estimators (MKE)."]], "corpus": "arxiv"}
76 | {"tstamp": 1722389761.8838, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722389761.3151, "finish": 1722389761.8838, "ip": "", "conv_id": "c19cebcaeafc4bdc912783fa3ba27b24", "model_name": "text-embedding-3-large", "prompt": "Who are the authors of the original vae paper ", "output": [["Who are the authors of the original vae paper ", "Title: AVAE: Adversarial Variational Auto Encoder\n\nAbstract: Among the wide variety of image generative models, two models stand out: Variational Auto Encoders (VAE) and Generative Adversarial Networks (GAN). GANs can produce realistic images, but they suffer from mode collapse and do not provide simple ways to get the latent representation of an image. On the other hand, VAEs do not have these problems, but they often generate images less realistic than GANs. In this article, we explain that this lack of realism is partially due to a common underestimation of the natural image manifold dimensionality. To solve this issue we introduce a new framework that combines VAE and GAN in a novel and complementary way to produce an auto-encoding model that keeps VAEs properties while generating images of GAN-quality. We evaluate our approach both qualitatively and quantitatively on five image datasets."]], "corpus": "arxiv"}
77 | + {"tstamp": 1722389869.8314, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722389869.5011, "finish": 1722389869.8314, "ip": "", "conv_id": "16a2429e632846798e0182b7b5c57d67", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "What is xpool cross modal language video attention", "output": [["What is xpool cross modal language video attention", "Title: B\\\"acklund--Darboux transformations in Sato's Grassmannian\n\nAbstract: We define B\\\"acklund--Darboux transformations in Sato's Grassmannian. They can be regarded as Darboux transformations on maximal algebras of commuting ordinary differential operators. We describe the action of these transformations on related objects: wave functions, tau-functions and spectral algebras. This paper is the second of a series of papers (hep-th/9510211, q-alg/9602011, q-alg/9602012) on the bispectral problem."]], "corpus": "arxiv"}
78 | + {"tstamp": 1722389869.8314, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722389869.5011, "finish": 1722389869.8314, "ip": "", "conv_id": "7569a1c8bfe4409ba6e210fa06fa25d2", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "What is xpool cross modal language video attention", "output": [["What is xpool cross modal language video attention", "Title: X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval\n\nAbstract: In text-video retrieval, the objective is to learn a cross-modal similarity function between a text and a video that ranks relevant text-video pairs higher than irrelevant pairs. However, videos inherently express a much wider gamut of information than texts. Instead, texts often capture sub-regions of entire videos and are most semantically similar to certain frames within videos. Therefore, for a given text, a retrieval model should focus on the text's most semantically similar video sub-regions to make a more relevant comparison. Yet, most existing works aggregate entire videos without directly considering text. Common text-agnostic aggregations schemes include mean-pooling or self-attention over the frames, but these are likely to encode misleading visual information not described in the given text. To address this, we propose a cross-modal attention model called X-Pool that reasons between a text and the frames of a video. Our core mechanism is a scaled dot product attention for a text to attend to its most semantically similar frames. We then generate an aggregated video representation conditioned on the text's attention weights over the frames. We evaluate our method on three benchmark datasets of MSR-VTT, MSVD and LSMDC, achieving new state-of-the-art results by up to 12% in relative improvement in Recall@1. Our findings thereby highlight the importance of joint text-video reasoning to extract important visual cues according to text. Full code and demo can be found at: https://layer6ai-labs.github.io/xpool/"]], "corpus": "arxiv"}
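Each individual record also carries `start` and `finish` Unix timestamps around the retrieval call, so per-model latency can be derived directly. A minimal sketch under the same file-layout assumptions as above:

```python
import json
from collections import defaultdict

def mean_latency(path):
    """Average (finish - start) seconds per model over individual retrieval records."""
    totals = defaultdict(lambda: [0.0, 0])
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            totals[rec["model_name"]][0] += rec["finish"] - rec["start"]
            totals[rec["model_name"]][1] += 1
    return {model: total / n for model, (total, n) in totals.items()}

for model, secs in mean_latency("data/retrieval_individual-81269da8-6dc7-417a-8e0a-245c4a753482.jsonl").items():
    print(f"{model}: {secs:.3f}s")
```

Note that the two sides of a battle share identical timestamps here (records 75/76 and 77/78), suggesting both retrievals in a pair are timed as one batch.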