Scheduled Commit
data/retrieval_battle-1948e9c4-e613-4487-b12e-0e988013b994.jsonl
CHANGED
@@ -77,3 +77,7 @@
77 |
{"tstamp": 1722729343.936, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "f23978abb2a74884a4ba29e0ba8734e5", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "how to transfer files from android to windows 8?", "0_output": [["how to transfer files from android to windows 8?", "Stackoverflow Stackexchange\n\nQ: How dose nginx get the value of $http_upgrade In this nginx doc, a variable $http_upgrade is used in nginx conf file, while this variable is not found in ngnix var index, so it's probably not a built-in variable, then how nginx know its value.\n\nA: The $http_upgrade use the value from client header upgrade, in nginx conf, $http_HEADER get the HEADER from client.\n"]], "0_corpus": "stackexchange", "1_conv_id": "9708da2f48f44a2f978c8ed8182c903f", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "how to transfer files from android to windows 8?", "1_output": [["how to transfer files from android to windows 8?", "Superuser Stackexchange\n\nQ: Files from Android to Windows 8.1 pc i would like to transfer some pictures from android phone to my windows 8.1 laptop over a bluetooth, but even when i paired the devices i don't have on the bt icon in windows buttons to send or recieve file (there is only Add device, show devices, join network, open settings, remove icon), i don't know what to do, because when i choose in phone to send the files, it says error, computer rejected or something like that. Can anybody help me what should i do? Thank You. Frank.\n\nA: You can also get the icon to show in the status area using the instructions here: Open Bluetooth file transfer wizard in Windows 8.\n"]], "1_corpus": "stackexchange"}
78 |
{"tstamp": 1722729360.6136, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "bc4362510fa04b58a90ace447166e0d7", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Looking for a new comprehensive benchmark for evaluating text embeddings in Scandinavian languages.", "0_output": [["Looking for a new comprehensive benchmark for evaluating text embeddings in Scandinavian languages.", "Title: The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding\n\nAbstract: The evaluation of English text embeddings has transitioned from evaluating a handful of datasets to broad coverage across many tasks through benchmarks such as MTEB. However, this is not the case for multilingual text embeddings due to a lack of available benchmarks. To address this problem, we introduce the Scandinavian Embedding Benchmark (SEB). SEB is a comprehensive framework that enables text embedding evaluation for Scandinavian languages across 24 tasks, 10 subtasks, and 4 task categories. Building on SEB, we evaluate more than 26 models, uncovering significant performance disparities between public and commercial solutions not previously captured by MTEB. We open-source SEB and integrate it with MTEB, thus bridging the text embedding evaluation gap for Scandinavian languages."]], "0_corpus": "arxiv", "1_conv_id": "97c3b25d694444af9c220e171a93150c", "1_model_name": "BM25", "1_prompt": "Looking for a new comprehensive benchmark for evaluating text embeddings in Scandinavian languages.", "1_output": [["Looking for a new comprehensive benchmark for evaluating text embeddings in Scandinavian languages.", "Title: The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding\n\nAbstract: The evaluation of English text embeddings has transitioned from evaluating a handful of datasets to broad coverage across many tasks through benchmarks such as MTEB. However, this is not the case for multilingual text embeddings due to a lack of available benchmarks. To address this problem, we introduce the Scandinavian Embedding Benchmark (SEB). SEB is a comprehensive framework that enables text embedding evaluation for Scandinavian languages across 24 tasks, 10 subtasks, and 4 task categories. Building on SEB, we evaluate more than 26 models, uncovering significant performance disparities between public and commercial solutions not previously captured by MTEB. We open-source SEB and integrate it with MTEB, thus bridging the text embedding evaluation gap for Scandinavian languages."]], "1_corpus": "arxiv"}
79 |
{"tstamp": 1722729388.7118, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "2b9951051f5b47c0a552f3cba7dcd546", "0_model_name": "BM25", "0_prompt": "how to make facebook profile picture not likeable?", "0_output": [["how to make facebook profile picture not likeable?", "Stackoverflow Stackexchange\n\nQ: How to get facebook profile large square picture ? Facebook profile picture is public available. \nFor example https://graph.facebook.com/oleg.dats/picture It's square but to small \nTo get larger picture https://graph.facebook.com/oleg.dats/picture?type=large But this one is not square. \nIs there way to get square profile picture (200px/200px) ? \n\nA: You can use this to get the profile picture of any size: \nhttps://graph.facebook.com/digitizor/picture?width=700&height=200\nThis shows the profile picture of facebook.com/digitizor in 700px by 200px\nChange dimensions and facebook profile name as per your requirements\nhttp://digitizor.com/2012/09/15/graphi-api-profile-picture/\n"]], "0_corpus": "stackexchange", "1_conv_id": "3cfb23b7dd814204a9f1df0ca3860820", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "how to make facebook profile picture not likeable?", "1_output": [["how to make facebook profile picture not likeable?", "Stackoverflow Stackexchange\n\nQ: Disabling comments or likes on a specific post made via the Facebook API I have an app that posts a message to a person's Facebook Timeline / wall.\nIs there a way to disable commenting or likes on that post via the Graph API?\n\nA: No that's not possible with graph api! \n"]], "1_corpus": "stackexchange"}
80 | +
{"tstamp": 1722729433.5463, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "39ec4e24a1a343e7afbe8166d494e8fe", "0_model_name": "BM25", "0_prompt": "is the cmos battery rechargeable?", "0_output": [["is the cmos battery rechargeable?", "Superuser Stackexchange\n\nQ: Is a CMOS battery rechargeable? Is a CMOS battery rechargeable? If so, what does its charger look like?\n\nA: Some motherboards used to have rechargable CMOS batteries on them.\nIn general now, no CMOS battery (especially the silver button cell type) are rechargable without investing in a specific rechargable version.\nDo not - I repeat - DO NOT attempt to recharge a non-rechargable lithium cell (button type) as the resultant explosion and noxious gasses could kill you (or worse).\n"]], "0_corpus": "stackexchange", "1_conv_id": "2c1f7f23b4f94eaab20ead1899cc5b12", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "is the cmos battery rechargeable?", "1_output": [["is the cmos battery rechargeable?", "Superuser Stackexchange\n\nQ: Is a CMOS battery rechargeable? Is a CMOS battery rechargeable? If so, what does its charger look like?\n\nA: Some motherboards used to have rechargable CMOS batteries on them.\nIn general now, no CMOS battery (especially the silver button cell type) are rechargable without investing in a specific rechargable version.\nDo not - I repeat - DO NOT attempt to recharge a non-rechargable lithium cell (button type) as the resultant explosion and noxious gasses could kill you (or worse).\n"]], "1_corpus": "stackexchange"}
81 | +
{"tstamp": 1722729454.4405, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "a07ee681e4ce4b77adb6eace147435d1", "0_model_name": "embed-english-v3.0", "0_prompt": "A novel approach in image captioning that integrates DETR-based detection with dual visual features.", "0_output": [["A novel approach in image captioning that integrates DETR-based detection with dual visual features.", "Title: Dual-Level Collaborative Transformer for Image Captioning\n\nAbstract: Descriptive region features extracted by object detection networks have played an important role in the recent advancements of image captioning. However, they are still criticized for the lack of contextual information and fine-grained details, which in contrast are the merits of traditional grid features. In this paper, we introduce a novel Dual-Level Collaborative Transformer (DLCT) network to realize the complementary advantages of the two features. Concretely, in DLCT, these two features are first processed by a novelDual-way Self Attenion (DWSA) to mine their intrinsic properties, where a Comprehensive Relation Attention component is also introduced to embed the geometric information. In addition, we propose a Locality-Constrained Cross Attention module to address the semantic noises caused by the direct fusion of these two features, where a geometric alignment graph is constructed to accurately align and reinforce region and grid features. To validate our model, we conduct extensive experiments on the highly competitive MS-COCO dataset, and achieve new state-of-the-art performance on both local and online test sets, i.e., 133.8% CIDEr-D on Karpathy split and 135.4% CIDEr on the official split. Code is available at https://github.com/luo3300612/image-captioning-DLCT."]], "0_corpus": "arxiv", "1_conv_id": "00c2be88cf054af7b54b834562acefa7", "1_model_name": "text-embedding-3-large", "1_prompt": "A novel approach in image captioning that integrates DETR-based detection with dual visual features.", "1_output": [["A novel approach in image captioning that integrates DETR-based detection with dual visual features.", "Title: Dual-Level Collaborative Transformer for Image Captioning\n\nAbstract: Descriptive region features extracted by object detection networks have played an important role in the recent advancements of image captioning. However, they are still criticized for the lack of contextual information and fine-grained details, which in contrast are the merits of traditional grid features. In this paper, we introduce a novel Dual-Level Collaborative Transformer (DLCT) network to realize the complementary advantages of the two features. Concretely, in DLCT, these two features are first processed by a novelDual-way Self Attenion (DWSA) to mine their intrinsic properties, where a Comprehensive Relation Attention component is also introduced to embed the geometric information. In addition, we propose a Locality-Constrained Cross Attention module to address the semantic noises caused by the direct fusion of these two features, where a geometric alignment graph is constructed to accurately align and reinforce region and grid features. To validate our model, we conduct extensive experiments on the highly competitive MS-COCO dataset, and achieve new state-of-the-art performance on both local and online test sets, i.e., 133.8% CIDEr-D on Karpathy split and 135.4% CIDEr on the official split. Code is available at https://github.com/luo3300612/image-captioning-DLCT."]], "1_corpus": "arxiv"}
82 | +
{"tstamp": 1722729471.1689, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "eec2fc96d6cc4ffebda3074e4720c27e", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Could GPT models pass typical assessments in university-level Python programming courses?", "0_output": [["Could GPT models pass typical assessments in university-level Python programming courses?", "Title: Proposal for an experiment to measure the Hausdorff dimension of quantum mechanical trajectories\n\nAbstract: We make a proposal for a Gedanken experiment, based on the Aharonov-Bohm effect, how to measure in principle the zig-zagness of the trajectory of propagation (abberation from its classical trajectory) of a massive particle in quantum mechanics. Experiment I is conceived to show that contributions from quantum paths abberating from the classical trajectory are directly observable. Experiment II is conceived to measure average length, scaling behavior and critical exponent (Hausdorff dimension) of quantum mechanical paths."]], "0_corpus": "arxiv", "1_conv_id": "65843c2f976142e9a49b427540c4f989", "1_model_name": "text-embedding-004", "1_prompt": "Could GPT models pass typical assessments in university-level Python programming courses?", "1_output": [["Could GPT models pass typical assessments in university-level Python programming courses?", "Title: Can Generative Pre-trained Transformers (GPT) Pass Assessments in Higher Education Programming Courses?\n\nAbstract: We evaluated the capability of generative pre-trained transformers (GPT), to pass assessments in introductory and intermediate Python programming courses at the postsecondary level. Discussions of potential uses (e.g., exercise generation, code explanation) and misuses (e.g., cheating) of this emerging technology in programming education have intensified, but to date there has not been a rigorous analysis of the models' capabilities in the realistic context of a full-fledged programming course with diverse set of assessment instruments. We evaluated GPT on three Python courses that employ assessments ranging from simple multiple-choice questions (no code involved) to complex programming projects with code bases distributed into multiple files (599 exercises overall). Further, we studied if and how successfully GPT models leverage feedback provided by an auto-grader. We found that the current models are not capable of passing the full spectrum of assessments typically involved in a Python programming course (<70% on even entry-level modules). Yet, it is clear that a straightforward application of these easily accessible models could enable a learner to obtain a non-trivial portion of the overall available score (>55%) in introductory and intermediate courses alike. While the models exhibit remarkable capabilities, including correcting solutions based on auto-grader's feedback, some limitations exist (e.g., poor handling of exercises requiring complex chains of reasoning steps). These findings can be leveraged by instructors wishing to adapt their assessments so that GPT becomes a valuable assistant for a learner as opposed to an end-to-end solution."]], "1_corpus": "arxiv"}
83 | +
{"tstamp": 1722729517.8617, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "6acdfbcf7e4b478a88e47e4dd75b66e2", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "I am looking for a paper that uses attention mechanisms for predicting the next location in a user's trajectory.", "0_output": [["I am looking for a paper that uses attention mechanisms for predicting the next location in a user's trajectory.", "Title: The light curve of the semiregular variable L2 Puppis: II. Evidence for solar-like excitation of the oscillations\n\nAbstract: We analyse visual observations of the pulsations of the red giant variable L2 Pup. The data cover 77 years between 1927 and 2005, thus providing an extensive empirical base for characterizing properties of the oscillations. The power spectrum of the light curve shows a single mode resolved into multiple peaks under a narrow envelope. We argue that this results from stochastic excitation, as seen in solar oscillations. The random fluctuations in phase also support this idea. A comparison with X Cam, a true Mira star with the same pulsation period, and W Cyg, a true semiregular star, illustrates the basic differences in phase behaviours. The Mira shows very stable phase, consistent with excitation by the kappa-mechanism, whereas W Cyg shows large phase fluctuations that imply stochastic excitation. We find L2 Pup to be intermediate, implying that both mechanisms play a role in its pulsation. Finally, we also checked the presence of low-dimensional chaos and could safely exclude it."]], "0_corpus": "arxiv", "1_conv_id": "a6bc3c1046fd4675b9a987e4f14491e1", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "I am looking for a paper that uses attention mechanisms for predicting the next location in a user's trajectory.", "1_output": [["I am looking for a paper that uses attention mechanisms for predicting the next location in a user's trajectory.", "Title: SanMove: Next Location Recommendation via Self-Attention Network\n\nAbstract: Currently, next location recommendation plays a vital role in location-based social network applications and services. Although many methods have been proposed to solve this problem, three important challenges have not been well addressed so far: (1) most existing methods are based on recurrent network, which is time-consuming to train long sequences due to not allowing for full parallelism; (2) personalized preferences generally are not considered reasonably; (3) existing methods rarely systematically studied how to efficiently utilize various auxiliary information (e.g., user ID and timestamp) in trajectory data and the spatio-temporal relations among non-consecutive locations. To address the above challenges, we propose a novel method named SanMove, a self-attention network based model, to predict the next location via capturing the long- and short-term mobility patterns of users. Specifically, SanMove introduces a long-term preference learning module, and it uses a self-attention module to capture the users long-term mobility pattern which can represent personalized location preferences of users. Meanwhile, SanMove uses a spatial-temporal guided non-invasive self-attention (STNOVA) to exploit auxiliary information to learn short-term preferences. 
We evaluate SanMove with two real-world datasets, and demonstrate SanMove is not only faster than the state-of-the-art RNN-based predict model but also outperforms the baselines for next location prediction."]], "1_corpus": "arxiv"}
data/retrieval_individual-1948e9c4-e613-4487-b12e-0e988013b994.jsonl
CHANGED
@@ -260,3 +260,11 @@
260 |
{"tstamp": 1722729396.19, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722729395.6753, "finish": 1722729396.19, "ip": "", "conv_id": "f74477e2812949ec86b4d7473162267c", "model_name": "text-embedding-3-large", "prompt": "Study on representational similarity between layers of neural language models and fMRI brain imaging data during story reading", "output": [["Study on representational similarity between layers of neural language models and fMRI brain imaging data during story reading", "Title: Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects\n\nAbstract: A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli (e.g. regular speech versus scrambled words, sentences, or paragraphs). Although successful, this `model-free' approach necessitates the acquisition of a large and costly set of neuroimaging data. Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli. We capitalize on the recently-discovered similarities between deep language models and the human brain to compute the mapping between i) the brain responses to regular speech and ii) the activations of deep language models elicited by modified stimuli (e.g. scrambled words, sentences, or paragraphs). Our model-based approach successfully replicates the seminal study of Lerner et al. (2011), which revealed the hierarchy of language areas by comparing the functional-magnetic resonance imaging (fMRI) of seven subjects listening to 7min of both regular and scrambled narratives. We further extend and precise these results to the brain signals of 305 individuals listening to 4.1 hours of narrated stories. Overall, this study paves the way for efficient and flexible analyses of the brain bases of language."]], "corpus": "arxiv"}
261 |
{"tstamp": 1722729421.6484, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722729419.4672, "finish": 1722729421.6484, "ip": "", "conv_id": "39ec4e24a1a343e7afbe8166d494e8fe", "model_name": "BM25", "prompt": "is the cmos battery rechargeable?", "output": [["is the cmos battery rechargeable?", "Superuser Stackexchange\n\nQ: Is a CMOS battery rechargeable? Is a CMOS battery rechargeable? If so, what does its charger look like?\n\nA: Some motherboards used to have rechargable CMOS batteries on them.\nIn general now, no CMOS battery (especially the silver button cell type) are rechargable without investing in a specific rechargable version.\nDo not - I repeat - DO NOT attempt to recharge a non-rechargable lithium cell (button type) as the resultant explosion and noxious gasses could kill you (or worse).\n"]], "corpus": "stackexchange"}
262 |
{"tstamp": 1722729421.6484, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722729419.4672, "finish": 1722729421.6484, "ip": "", "conv_id": "2c1f7f23b4f94eaab20ead1899cc5b12", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "is the cmos battery rechargeable?", "output": [["is the cmos battery rechargeable?", "Superuser Stackexchange\n\nQ: Is a CMOS battery rechargeable? Is a CMOS battery rechargeable? If so, what does its charger look like?\n\nA: Some motherboards used to have rechargable CMOS batteries on them.\nIn general now, no CMOS battery (especially the silver button cell type) are rechargable without investing in a specific rechargable version.\nDo not - I repeat - DO NOT attempt to recharge a non-rechargable lithium cell (button type) as the resultant explosion and noxious gasses could kill you (or worse).\n"]], "corpus": "stackexchange"}
263 | +
{"tstamp": 1722729445.3239, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722729444.6724, "finish": 1722729445.3239, "ip": "", "conv_id": "a07ee681e4ce4b77adb6eace147435d1", "model_name": "embed-english-v3.0", "prompt": "A novel approach in image captioning that integrates DETR-based detection with dual visual features.", "output": [["A novel approach in image captioning that integrates DETR-based detection with dual visual features.", "Title: Dual-Level Collaborative Transformer for Image Captioning\n\nAbstract: Descriptive region features extracted by object detection networks have played an important role in the recent advancements of image captioning. However, they are still criticized for the lack of contextual information and fine-grained details, which in contrast are the merits of traditional grid features. In this paper, we introduce a novel Dual-Level Collaborative Transformer (DLCT) network to realize the complementary advantages of the two features. Concretely, in DLCT, these two features are first processed by a novelDual-way Self Attenion (DWSA) to mine their intrinsic properties, where a Comprehensive Relation Attention component is also introduced to embed the geometric information. In addition, we propose a Locality-Constrained Cross Attention module to address the semantic noises caused by the direct fusion of these two features, where a geometric alignment graph is constructed to accurately align and reinforce region and grid features. To validate our model, we conduct extensive experiments on the highly competitive MS-COCO dataset, and achieve new state-of-the-art performance on both local and online test sets, i.e., 133.8% CIDEr-D on Karpathy split and 135.4% CIDEr on the official split. Code is available at https://github.com/luo3300612/image-captioning-DLCT."]], "corpus": "arxiv"}
264 | +
{"tstamp": 1722729445.3239, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722729444.6724, "finish": 1722729445.3239, "ip": "", "conv_id": "00c2be88cf054af7b54b834562acefa7", "model_name": "text-embedding-3-large", "prompt": "A novel approach in image captioning that integrates DETR-based detection with dual visual features.", "output": [["A novel approach in image captioning that integrates DETR-based detection with dual visual features.", "Title: Dual-Level Collaborative Transformer for Image Captioning\n\nAbstract: Descriptive region features extracted by object detection networks have played an important role in the recent advancements of image captioning. However, they are still criticized for the lack of contextual information and fine-grained details, which in contrast are the merits of traditional grid features. In this paper, we introduce a novel Dual-Level Collaborative Transformer (DLCT) network to realize the complementary advantages of the two features. Concretely, in DLCT, these two features are first processed by a novelDual-way Self Attenion (DWSA) to mine their intrinsic properties, where a Comprehensive Relation Attention component is also introduced to embed the geometric information. In addition, we propose a Locality-Constrained Cross Attention module to address the semantic noises caused by the direct fusion of these two features, where a geometric alignment graph is constructed to accurately align and reinforce region and grid features. To validate our model, we conduct extensive experiments on the highly competitive MS-COCO dataset, and achieve new state-of-the-art performance on both local and online test sets, i.e., 133.8% CIDEr-D on Karpathy split and 135.4% CIDEr on the official split. Code is available at https://github.com/luo3300612/image-captioning-DLCT."]], "corpus": "arxiv"}
265 | +
{"tstamp": 1722729463.9509, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722729461.6565, "finish": 1722729463.9509, "ip": "", "conv_id": "eec2fc96d6cc4ffebda3074e4720c27e", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Could GPT models pass typical assessments in university-level Python programming courses?", "output": [["Could GPT models pass typical assessments in university-level Python programming courses?", "Title: Proposal for an experiment to measure the Hausdorff dimension of quantum mechanical trajectories\n\nAbstract: We make a proposal for a Gedanken experiment, based on the Aharonov-Bohm effect, how to measure in principle the zig-zagness of the trajectory of propagation (abberation from its classical trajectory) of a massive particle in quantum mechanics. Experiment I is conceived to show that contributions from quantum paths abberating from the classical trajectory are directly observable. Experiment II is conceived to measure average length, scaling behavior and critical exponent (Hausdorff dimension) of quantum mechanical paths."]], "corpus": "arxiv"}
266 | +
{"tstamp": 1722729463.9509, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722729461.6565, "finish": 1722729463.9509, "ip": "", "conv_id": "65843c2f976142e9a49b427540c4f989", "model_name": "text-embedding-004", "prompt": "Could GPT models pass typical assessments in university-level Python programming courses?", "output": [["Could GPT models pass typical assessments in university-level Python programming courses?", "Title: Can Generative Pre-trained Transformers (GPT) Pass Assessments in Higher Education Programming Courses?\n\nAbstract: We evaluated the capability of generative pre-trained transformers (GPT), to pass assessments in introductory and intermediate Python programming courses at the postsecondary level. Discussions of potential uses (e.g., exercise generation, code explanation) and misuses (e.g., cheating) of this emerging technology in programming education have intensified, but to date there has not been a rigorous analysis of the models' capabilities in the realistic context of a full-fledged programming course with diverse set of assessment instruments. We evaluated GPT on three Python courses that employ assessments ranging from simple multiple-choice questions (no code involved) to complex programming projects with code bases distributed into multiple files (599 exercises overall). Further, we studied if and how successfully GPT models leverage feedback provided by an auto-grader. We found that the current models are not capable of passing the full spectrum of assessments typically involved in a Python programming course (<70% on even entry-level modules). Yet, it is clear that a straightforward application of these easily accessible models could enable a learner to obtain a non-trivial portion of the overall available score (>55%) in introductory and intermediate courses alike. While the models exhibit remarkable capabilities, including correcting solutions based on auto-grader's feedback, some limitations exist (e.g., poor handling of exercises requiring complex chains of reasoning steps). These findings can be leveraged by instructors wishing to adapt their assessments so that GPT becomes a valuable assistant for a learner as opposed to an end-to-end solution."]], "corpus": "arxiv"}
267 | +
{"tstamp": 1722729486.8592, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722729486.6455, "finish": 1722729486.8592, "ip": "", "conv_id": "6acdfbcf7e4b478a88e47e4dd75b66e2", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "I am looking for a paper that uses attention mechanisms for predicting the next location in a user's trajectory.", "output": [["I am looking for a paper that uses attention mechanisms for predicting the next location in a user's trajectory.", "Title: The light curve of the semiregular variable L2 Puppis: II. Evidence for solar-like excitation of the oscillations\n\nAbstract: We analyse visual observations of the pulsations of the red giant variable L2 Pup. The data cover 77 years between 1927 and 2005, thus providing an extensive empirical base for characterizing properties of the oscillations. The power spectrum of the light curve shows a single mode resolved into multiple peaks under a narrow envelope. We argue that this results from stochastic excitation, as seen in solar oscillations. The random fluctuations in phase also support this idea. A comparison with X Cam, a true Mira star with the same pulsation period, and W Cyg, a true semiregular star, illustrates the basic differences in phase behaviours. The Mira shows very stable phase, consistent with excitation by the kappa-mechanism, whereas W Cyg shows large phase fluctuations that imply stochastic excitation. We find L2 Pup to be intermediate, implying that both mechanisms play a role in its pulsation. Finally, we also checked the presence of low-dimensional chaos and could safely exclude it."]], "corpus": "arxiv"}
268 | +
{"tstamp": 1722729486.8592, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722729486.6455, "finish": 1722729486.8592, "ip": "", "conv_id": "a6bc3c1046fd4675b9a987e4f14491e1", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "I am looking for a paper that uses attention mechanisms for predicting the next location in a user's trajectory.", "output": [["I am looking for a paper that uses attention mechanisms for predicting the next location in a user's trajectory.", "Title: SanMove: Next Location Recommendation via Self-Attention Network\n\nAbstract: Currently, next location recommendation plays a vital role in location-based social network applications and services. Although many methods have been proposed to solve this problem, three important challenges have not been well addressed so far: (1) most existing methods are based on recurrent network, which is time-consuming to train long sequences due to not allowing for full parallelism; (2) personalized preferences generally are not considered reasonably; (3) existing methods rarely systematically studied how to efficiently utilize various auxiliary information (e.g., user ID and timestamp) in trajectory data and the spatio-temporal relations among non-consecutive locations. To address the above challenges, we propose a novel method named SanMove, a self-attention network based model, to predict the next location via capturing the long- and short-term mobility patterns of users. Specifically, SanMove introduces a long-term preference learning module, and it uses a self-attention module to capture the users long-term mobility pattern which can represent personalized location preferences of users. Meanwhile, SanMove uses a spatial-temporal guided non-invasive self-attention (STNOVA) to exploit auxiliary information to learn short-term preferences. We evaluate SanMove with two real-world datasets, and demonstrate SanMove is not only faster than the state-of-the-art RNN-based predict model but also outperforms the baselines for next location prediction."]], "corpus": "arxiv"}
269 | +
{"tstamp": 1722729527.7473, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722729525.4819, "finish": 1722729527.7473, "ip": "", "conv_id": "20f9ce8ed1e34e8181aef49b28ded9b6", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "can i be a medic in the marines", "output": [["can i be a medic in the marines", "Hospital corpsman\n\nThey may function as clinical or specialty technicians, medical administrative personnel and health care providers at medical treatment facilities. They also serve as battlefield corpsmen with the Marine Corps, rendering emergency medical treatment to include initial treatment in a combat environment. Qualified hospital corpsmen may be assigned the responsibility of independent duty aboard ships and submarines, Fleet Marine Force, SEAL and Seabee units, and at isolated duty stations where no medical officer is available.\nHospital corpsmen were previously trained at Naval Hospital Corps School, Great Lakes, Illinois, and the U.S. Naval Hospital Corps School San Diego, California, until the 2011 Base Realignment and Closure Bill caused Hospital Corps School to be relocated to the Medical Education and Training Campus (METC) at Joint Base San Antonio, Texas. Naval Hospital Corps School was also located at NRMC Balboa in San Diego, California.\nDuring the Vietnam War, many of the 16-week Naval Hospital Corps school graduates went directly to 8404 Field Medical Service School (FMSS) at Camp Lejeune, North Carolina, or Camp Pendleton, California, for nine weeks of field training, before deployment to a Marine Corps unit in South Vietnam.\nIn the United States Marine Corps, the colloquial form of address for a Hospital Corpsman who rate to wear the Navy's Fleet Marine Force (FMF) warfare device (showing they were or are attached to an FMF Unit) is \"Doc\", which is generally used as a sign of respect. Hospital Corpsman who are not authorized to wear the device would still be referred to by their rate and rank."]], "corpus": "wikipedia"}
270 | +
{"tstamp": 1722729527.7473, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722729525.4819, "finish": 1722729527.7473, "ip": "", "conv_id": "44805da2cc0b4719bb0580eb1dcfba7d", "model_name": "text-embedding-3-large", "prompt": "can i be a medic in the marines", "output": [["can i be a medic in the marines", "Combat medic\n\nIn the U.S. Navy, enlisted medical personnel are known as corpsmen, not medics. The colloquial form of address for a Hospital Corpsman and Army Medics is \"Doc\". In the Army and U.S. Marine Corps, this term is generally used as a sign of respect. The U.S. Navy deploys FMF Hospital Corpsman attached to U.S. Marine Corps units as part of the Fleet Marine Force. Since the U.S. Marine Corps is part of the Department of the Navy, it relies on Navy corpsmen and other Naval medical personnel for medical care.\nU.S. Air Force aerospace medical services technicians have frequently served attached to U.S. Army units in recent conflicts. Though all combat medical personnel are universally referred to as \"medic\", within different branches of the U.S. military, the skill level, quality of training and scope of work performed by medics varies from branch to branch and unit to unit.\nAs a result of the 2005 BRAC, the U.S. Department of Defense has moved most medical training for all branches of the armed forces to Fort Sam Houston of Joint Base San Antonio. A new Medical Education and Training Campus was constructed and the Air Force's 937th Training Group and Naval Hospital Corps School were relocated to Fort Sam Houston, joining the Army's existing Army Medical Department Center & School. Although each service has some training particular to its branch, the bulk of the course material and instruction is shared between medical personnel of the different services."]], "corpus": "wikipedia"}