Scheduled Commit
data/retrieval_battle-81269da8-6dc7-417a-8e0a-245c4a753482.jsonl
CHANGED
@@ -44,3 +44,4 @@
44 |
{"tstamp": 1722401756.4486, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "87584ce5765148af9587069b47183ee5", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "state space models transformers", "0_output": [["state space models transformers", "Title: Phonon Assisted Multimagnon Optical Absorption and Long Lived Two-Magnon States in Undoped Lamellar Copper Oxides\n\nAbstract: We calculate the effective charge for multimagnon infrared (IR) absorption assisted by phonons in the parent insulating compounds of cuprate superconductors and the spectra for two-magnon absorption using interacting spin-wave theory. Recent measured bands in the mid IR [Perkins et al. Phys. Rev. Lett. {\\bf 71} 1621 (1993)] are interpreted as involving one phonon plus a two-magnon virtual bound state, and one phonon plus higher multimagnon absorption processes. The virtual bound state consists of a narrow resonance occurring when the magnon pair has total momentum close to $(\\pi,0)$."]], "0_corpus": "arxiv", "1_conv_id": "476c6bc225554409822438eb737cb20a", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "state space models transformers", "1_output": [["state space models transformers", "Title: Active Clustering with Model-Based Uncertainty Reduction\n\nAbstract: Semi-supervised clustering seeks to augment traditional clustering methods by incorporating side information provided via human expertise in order to increase the semantic meaningfulness of the resulting clusters. However, most current methods are \\emph{passive} in the sense that the side information is provided beforehand and selected randomly. This may require a large number of constraints, some of which could be redundant, unnecessary, or even detrimental to the clustering results. Thus in order to scale such semi-supervised algorithms to larger problems it is desirable to pursue an \\emph{active} clustering method---i.e. an algorithm that maximizes the effectiveness of the available human labor by only requesting human input where it will have the greatest impact. Here, we propose a novel online framework for active semi-supervised spectral clustering that selects pairwise constraints as clustering proceeds, based on the principle of uncertainty reduction. Using a first-order Taylor expansion, we decompose the expected uncertainty reduction problem into a gradient and a step-scale, computed via an application of matrix perturbation theory and cluster-assignment entropy, respectively. The resulting model is used to estimate the uncertainty reduction potential of each sample in the dataset. We then present the human user with pairwise queries with respect to only the best candidate sample. We evaluate our method using three different image datasets (faces, leaves and dogs), a set of common UCI machine learning datasets and a gene dataset. The results validate our decomposition formulation and show that our method is consistently superior to existing state-of-the-art techniques, as well as being robust to noise and to unknown numbers of clusters."]], "1_corpus": "arxiv"}
|
45 |
{"tstamp": 1722401773.3734, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "b183c66d1a864220b961fd3aab8a2786", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "machine learning ssm models mambda", "0_output": [["machine learning ssm models mambda", "Title: Bayesian Inference for Linear Dynamic Models with Dirichlet Process Mixtures\n\nAbstract: Using Kalman techniques, it is possible to perform optimal estimation in linear Gaussian state-space models. We address here the case where the noise probability density functions are of unknown functional form. A flexible Bayesian nonparametric noise model based on Dirichlet process mixtures is introduced. Efficient Markov chain Monte Carlo and Sequential Monte Carlo methods are then developed to perform optimal batch and sequential estimation in such contexts. The algorithms are applied to blind deconvolution and change point detection. Experimental results on synthetic and real data demonstrate the efficiency of this approach in various contexts."]], "0_corpus": "arxiv", "1_conv_id": "076e03e824ba49ca95245f716214aec3", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "machine learning ssm models mambda", "1_output": [["machine learning ssm models mambda", "Title: Vision Mamba: A Comprehensive Survey and Taxonomy\n\nAbstract: State Space Model (SSM) is a mathematical model used to describe and analyze the behavior of dynamic systems. This model has witnessed numerous applications in several fields, including control theory, signal processing, economics and machine learning. In the field of deep learning, state space models are used to process sequence data, such as time series analysis, natural language processing (NLP) and video understanding. By mapping sequence data to state space, long-term dependencies in the data can be better captured. In particular, modern SSMs have shown strong representational capabilities in NLP, especially in long sequence modeling, while maintaining linear time complexity. Notably, based on the latest state-space models, Mamba merges time-varying parameters into SSMs and formulates a hardware-aware algorithm for efficient training and inference. Given its impressive efficiency and strong long-range dependency modeling capability, Mamba is expected to become a new AI architecture that may outperform Transformer. Recently, a number of works have attempted to study the potential of Mamba in various fields, such as general vision, multi-modal, medical image analysis and remote sensing image analysis, by extending Mamba from natural language domain to visual domain. To fully understand Mamba in the visual domain, we conduct a comprehensive survey and present a taxonomy study. This survey focuses on Mamba's application to a variety of visual tasks and data types, and discusses its predecessors, recent advances and far-reaching impact on a wide range of domains. Since Mamba is now on an upward trend, please actively notice us if you have new findings, and new progress on Mamba will be included in this survey in a timely manner and updated on the Mamba project at https://github.com/lx6c78/Vision-Mamba-A-Comprehensive-Survey-and-Taxonomy."]], "1_corpus": "arxiv"}
|
46 |
{"tstamp": 1722403592.4887, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "6f8b359afa074fe9b99960410ca68854", "0_model_name": "embed-english-v3.0", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "ed9faaf063f642cd872d3b354dbe1376", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "1_corpus": "arxiv"}
|
47 |
+
{"tstamp": 1722404064.2696, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "1f792446858649febdcd92f6bf7b0b37", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Research on combining sentiment analysis and market data for volatility forecasting.", "0_output": [["Research on combining sentiment analysis and market data for volatility forecasting.", "Title: Volatility forecasting using Deep Learning and sentiment analysis\n\nAbstract: Several studies have shown that deep learning models can provide more accurate volatility forecasts than the traditional methods used within this domain. This paper presents a composite model that merges a deep learning approach with sentiment analysis for predicting market volatility. To classify public sentiment, we use a Convolutional Neural Network, which obtained data from Reddit global news headlines. We then describe a composite forecasting model, a Long-Short-Term-Memory Neural Network method, to use historical sentiment and the previous day's volatility to make forecasts. We employed this method on the past volatility of the S&P500 and the major BRICS indices to corroborate its effectiveness. Our results demonstrate that including sentiment can improve Deep Learning volatility forecasting models. However, in contrast to return forecasting, the performance benefits of including sentiment appear for volatility forecasting appears to be market specific."]], "0_corpus": "arxiv", "1_conv_id": "50604820dc7b45a784d897f43d88f88f", "1_model_name": "text-embedding-004", "1_prompt": "Research on combining sentiment analysis and market data for volatility forecasting.", "1_output": [["Research on combining sentiment analysis and market data for volatility forecasting.", "Title: A Sentiment Analysis Approach to the Prediction of Market Volatility\n\nAbstract: Prediction and quantification of future volatility and returns play an important role in financial modelling, both in portfolio optimization and risk management. Natural language processing today allows to process news and social media comments to detect signals of investors' confidence. We have explored the relationship between sentiment extracted from financial news and tweets and FTSE100 movements. We investigated the strength of the correlation between sentiment measures on a given day and market volatility and returns observed the next day. The findings suggest that there is evidence of correlation between sentiment and stock market movements: the sentiment captured from news headlines could be used as a signal to predict market returns; the same does not apply for volatility. Also, in a surprising finding, for the sentiment found in Twitter comments we obtained a correlation coefficient of -0.7, and p-value below 0.05, which indicates a strong negative correlation between positive sentiment captured from the tweets on a given day and the volatility observed the next day. We developed an accurate classifier for the prediction of market volatility in response to the arrival of new information by deploying topic modelling, based on Latent Dirichlet Allocation, to extract feature vectors from a collection of tweets and financial news. The obtained features were used as additional input to the classifier. Thanks to the combination of sentiment and topic modelling our classifier achieved a directional prediction accuracy for volatility of 63%."]], "1_corpus": "arxiv"}
|
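The records above are JSONL: one battle per line, with a `type` field recording the human vote (`leftvote`, `rightvote`, `tievote`, `bothbadvote`) and `0_model_name`/`1_model_name` identifying the two anonymized embedding models. A minimal sketch of tallying votes from such a log (the sample lines below are abbreviated versions of the real records, keeping only the fields the tally needs):

```python
import json

# Abbreviated sample records mirroring the battle log's schema;
# real lines also carry prompts, outputs, conv_ids, etc.
sample_lines = [
    '{"tstamp": 1722401756.4486, "task_type": "retrieval", "type": "bothbadvote", '
    '"0_model_name": "jinaai/jina-embeddings-v2-base-en", '
    '"1_model_name": "nomic-ai/nomic-embed-text-v1.5"}',
    '{"tstamp": 1722401773.3734, "task_type": "retrieval", "type": "rightvote", '
    '"0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", '
    '"1_model_name": "BAAI/bge-large-en-v1.5"}',
]

def tally_votes(lines):
    """Count occurrences of each vote type across JSONL battle records."""
    counts = {}
    for line in lines:
        record = json.loads(line)  # each line is one complete JSON object
        counts[record["type"]] = counts.get(record["type"], 0) + 1
    return counts

print(tally_votes(sample_lines))  # {'bothbadvote': 1, 'rightvote': 1}
```

In a full pipeline the same loop would iterate over the committed `.jsonl` file line by line; `leftvote` credits `0_model_name`, `rightvote` credits `1_model_name`, and `bothbadvote`/`tievote` credit neither for ranking purposes.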
data/retrieval_individual-f62bcad9-19c4-430b-a7ba-aa2bfeba99dc.jsonl
CHANGED
@@ -2,3 +2,5 @@
2 |
{"tstamp": 1722404021.5866, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722404020.1107, "finish": 1722404021.5866, "ip": "", "conv_id": "cfd99c56377e4533b7115306e6bf5162", "model_name": "embed-english-v3.0", "prompt": "who scored the most points in a game nba history", "output": [["who scored the most points in a game nba history", "List of highest-scoring NBA games\n\nSummary The highest-scoring regular-season game in NBA history is the triple-overtime game between the Detroit Pistons and the Denver Nuggets on December 13, 1983. The two teams combined to score 370 points, with the Pistons defeating the Nuggets 186–184. An NBA-record four players scored over 40 points in the game, including the Nuggets' Kiki Vandeweghe with a game-high 51. The two teams also set several other NBA records, including the most points scored by one team (186 points), the most points scored by a losing team (184), the most field goals by two teams (142), most field goals by one team (74) and most assists by two teams (93).\nThe highest-scoring regular season game in regulation was between the Golden State Warriors and the Denver Nuggets on November 2, 1990. In that game, Golden State defeated Denver 162–158. The Warriors' Chris Mullin scored a game-high 38 points. The Nuggets were coached by Doug Moe from 1980 to 1990 and Paul Westhead from 1990 to 1992, both of whom employed a run-and-gun offensive strategy, which focuses on attempting a high number of field goals while also conceding a large number of points on defense. In fact, Moe's and Westhead's Nuggets were participants in four of the ten highest-scoring regular season games in NBA history. The Warriors were coached by Don Nelson from 1988 to 1995 and 2006 to 2010. He employed Nellie Ball, a style of run and gun that uses smaller, more athletic players to outrun opponents. Another notable high-scoring regular season game is a March 2, 1962, game between the Philadelphia Warriors and the New York Knicks. In that game, the Warriors' Wilt Chamberlain scored an NBA-record 100 points."]], "corpus": "wikipedia"}
|
3 |
{"tstamp": 1722404025.3419, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722404024.8194, "finish": 1722404025.3419, "ip": "", "conv_id": "5ef17d307c164fcf9ce84bb32d4ddf44", "model_name": "GritLM/GritLM-7B", "prompt": "will system restore remove malware?", "output": [["will system restore remove malware?", "Superuser Stackexchange\n\nQ: Can system restore remove virus from the computer? Can a system restore to a past time remove the virus from the computer?\n\nA: The short answer is no.\nSystem Restore can help spread a virus though.\nSee here for more info: Removing Viruses with System Restore. Quoting the relevant parts:\n\nWhen System Restore first appeared in the market, computer manufacturers and other companies claimed ... that it could remove viruses and fix other problems with the computer.\nIn reality, System Restore can create copies of the infected files. And some viruses may be capable of infecting the restore volume as well as the actual system files. When a person cleans their computer using an anti-virus, then uses System Restore, they may inadvertently re-infect the computer. Or if they use System Restore as a means of removal, either the restore will fail (if the anti-virus cleans the virus out during the restore process) or the restore will replace the file with an infected version.\n\n"]], "corpus": "stackexchange"}
|
4 |
{"tstamp": 1722404025.3419, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722404024.8194, "finish": 1722404025.3419, "ip": "", "conv_id": "6a4fe1dc03d94381ade5917f12c7933a", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "will system restore remove malware?", "output": [["will system restore remove malware?", "Superuser Stackexchange\n\nQ: Can system restore remove virus from the computer? Can a system restore to a past time remove the virus from the computer?\n\nA: The short answer is no.\nSystem Restore can help spread a virus though.\nSee here for more info: Removing Viruses with System Restore. Quoting the relevant parts:\n\nWhen System Restore first appeared in the market, computer manufacturers and other companies claimed ... that it could remove viruses and fix other problems with the computer.\nIn reality, System Restore can create copies of the infected files. And some viruses may be capable of infecting the restore volume as well as the actual system files. When a person cleans their computer using an anti-virus, then uses System Restore, they may inadvertently re-infect the computer. Or if they use System Restore as a means of removal, either the restore will fail (if the anti-virus cleans the virus out during the restore process) or the restore will replace the file with an infected version.\n\n"]], "corpus": "stackexchange"}
|
5 |
+
{"tstamp": 1722404070.8053, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722404069.9427, "finish": 1722404070.8053, "ip": "", "conv_id": "4c228e854b09431e85d5b528f5c976b6", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "How can I automatically generate code-switching sentences for language models using GANs?", "output": [["How can I automatically generate code-switching sentences for language models using GANs?", "Title: Code-switching Sentence Generation by Generative Adversarial Networks and its Application to Data Augmentation\n\nAbstract: Code-switching is about dealing with alternative languages in speech or text. It is partially speaker-depend and domain-related, so completely explaining the phenomenon by linguistic rules is challenging. Compared to most monolingual tasks, insufficient data is an issue for code-switching. To mitigate the issue without expensive human annotation, we proposed an unsupervised method for code-switching data augmentation. By utilizing a generative adversarial network, we can generate intra-sentential code-switching sentences from monolingual sentences. We applied proposed method on two corpora, and the result shows that the generated code-switching sentences improve the performance of code-switching language models."]], "corpus": "arxiv"}
|
6 |
+
{"tstamp": 1722404070.8053, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722404069.9427, "finish": 1722404070.8053, "ip": "", "conv_id": "187adc143b07496aa26bec9a0fd3ab2e", "model_name": "text-embedding-3-large", "prompt": "How can I automatically generate code-switching sentences for language models using GANs?", "output": [["How can I automatically generate code-switching sentences for language models using GANs?", "Title: Code-switching Sentence Generation by Generative Adversarial Networks and its Application to Data Augmentation\n\nAbstract: Code-switching is about dealing with alternative languages in speech or text. It is partially speaker-depend and domain-related, so completely explaining the phenomenon by linguistic rules is challenging. Compared to most monolingual tasks, insufficient data is an issue for code-switching. To mitigate the issue without expensive human annotation, we proposed an unsupervised method for code-switching data augmentation. By utilizing a generative adversarial network, we can generate intra-sentential code-switching sentences from monolingual sentences. We applied proposed method on two corpora, and the result shows that the generated code-switching sentences improve the performance of code-switching language models."]], "corpus": "arxiv"}
|