Dataset: mteb
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask

Muennighoff committed · verified · Commit 80d984f · 1 parent: 90e6938

Scheduled Commit
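The files below are line-delimited JSON: one self-contained record per line. A minimal sketch of reading one of them with plain Python, assuming the repository files have been downloaded locally (the Datasets and Dask loaders listed above can read the same files):

```python
import json
from pathlib import Path

# One record per line; blank lines are skipped.
path = Path("data/clustering_battle-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl")
records = [json.loads(line) for line in path.read_text().splitlines() if line.strip()]

for r in records:
    print(r["tstamp"], r["task_type"], r["type"],
          r["0_model_name"], "vs", r["1_model_name"])
```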
data/clustering_battle-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl CHANGED
@@ -1,2 +1,3 @@
  {"tstamp": 1723244337.8557, "task_type": "clustering", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "8ee353339b874131883f4e758c4fb6c2", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": ["Apple", "Samsung", "Huawei", "LG", "OnePlus", "Xiaomi", "wisdom tooth", "molar", "incisor", "premolar", "canine", "Opera", "Safari", "Brave", "Edge", "Firefox", "Chrome", "Capricorn", "Leo", "Taurus", "Aries", "Scorpio", "Libra", "Cancer", "Gemini", "canoe", "motorboat", "yacht", "catamaran"], "0_ncluster": 5, "0_output": "", "0_ndim": "3D (press for 2D)", "0_dim_method": "PCA", "0_clustering_method": "KMeans", "1_conv_id": "dca734cd08a946dc991a76cbd4459fd7", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": ["Apple", "Samsung", "Huawei", "LG", "OnePlus", "Xiaomi", "wisdom tooth", "molar", "incisor", "premolar", "canine", "Opera", "Safari", "Brave", "Edge", "Firefox", "Chrome", "Capricorn", "Leo", "Taurus", "Aries", "Scorpio", "Libra", "Cancer", "Gemini", "canoe", "motorboat", "yacht", "catamaran"], "1_ncluster": 5, "1_output": "", "1_ndim": "3D (press for 2D)", "1_dim_method": "PCA", "1_clustering_method": "KMeans"}
  {"tstamp": 1723409498.4434, "task_type": "clustering", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "98569eae160342459406a000ac1341bd", "0_model_name": "GritLM/GritLM-7B", "0_prompt": ["shear wall design", "window header", "subfloor spacinng", "joist design", "beam clear span", "anchor bolt design", "jack stud design"], "0_ncluster": 2, "0_output": "", "0_ndim": "3D (press for 2D)", "0_dim_method": "PCA", "0_clustering_method": "KMeans", "1_conv_id": "ae75439bc445400696766ec73cdf8625", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": ["shear wall design", "window header", "subfloor spacinng", "joist design", "beam clear span", "anchor bolt design", "jack stud design"], "1_ncluster": 2, "1_output": "", "1_ndim": "3D (press for 2D)", "1_dim_method": "PCA", "1_clustering_method": "KMeans"}
+ {"tstamp": 1723409942.498, "task_type": "clustering", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f26db202e63e41289a84e01ff79a1a2e", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": ["I-Joist Spacing", "Floor span", "shear wall design", "opening width", "jack studs design", "subfloor thickness", "header thickness", "soil capacity", "footing with stem wall", "bearing capacity", "frost depth", "anchor bolt spacing", "SEER rating", "R-value", "water efficiency", "energy consumption"], "0_ncluster": 4, "0_output": "", "0_ndim": "3D (press for 2D)", "0_dim_method": "PCA", "0_clustering_method": "KMeans", "1_conv_id": "eaf2988a721845c695bca77835c10fec", "1_model_name": "voyage-multilingual-2", "1_prompt": ["I-Joist Spacing", "Floor span", "shear wall design", "opening width", "jack studs design", "subfloor thickness", "header thickness", "soil capacity", "footing with stem wall", "bearing capacity", "frost depth", "anchor bolt spacing", "SEER rating", "R-value", "water efficiency", "energy consumption"], "1_ncluster": 4, "1_output": "", "1_ndim": "3D (press for 2D)", "1_dim_method": "PCA", "1_clustering_method": "KMeans"}
data/clustering_individual-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl CHANGED
@@ -40,3 +40,5 @@
  {"tstamp": 1723409403.2207, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723409403.138, "finish": 1723409403.2207, "ip": "", "conv_id": "ae75439bc445400696766ec73cdf8625", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["shear wall design", "window header", "subfloor spacinng", "joist design", "beam clear span", "anchor bolt design"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1723409482.8391, "task_type": "clustering", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723409482.7583, "finish": 1723409482.8391, "ip": "", "conv_id": "98569eae160342459406a000ac1341bd", "model_name": "GritLM/GritLM-7B", "prompt": ["shear wall design", "window header", "subfloor spacinng", "joist design", "beam clear span", "anchor bolt design", "jack stud design"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1723409482.8391, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723409482.7583, "finish": 1723409482.8391, "ip": "", "conv_id": "ae75439bc445400696766ec73cdf8625", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["shear wall design", "window header", "subfloor spacinng", "joist design", "beam clear span", "anchor bolt design", "jack stud design"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723409801.9819, "task_type": "clustering", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1723409801.4405, "finish": 1723409801.9819, "ip": "", "conv_id": "f26db202e63e41289a84e01ff79a1a2e", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": ["I-Joist Spacing", "Floor span", "shear wall design", "opening width", "jack studs design", "subfloor thickness", "header thickness", "soil capacity", "footing with stem wall", "bearing capacity", "frost depth", "anchor bolt spacing", "SEER rating", "R-value", "water efficiency", "energy consumption"], "ncluster": 4, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723409801.9819, "task_type": "clustering", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1723409801.4405, "finish": 1723409801.9819, "ip": "", "conv_id": "eaf2988a721845c695bca77835c10fec", "model_name": "voyage-multilingual-2", "prompt": ["I-Joist Spacing", "Floor span", "shear wall design", "opening width", "jack studs design", "subfloor thickness", "header thickness", "soil capacity", "footing with stem wall", "bearing capacity", "frost depth", "anchor bolt spacing", "SEER rating", "R-value", "water efficiency", "energy consumption"], "ncluster": 4, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
data/retrieval_battle-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl CHANGED
@@ -59,3 +59,5 @@
  {"tstamp": 1723366112.8085, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7d91350d144543f190c415f6df188816", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "机器学习的主要算法有哪些?", "0_output": [["机器学习的主要算法有哪些?", "Machine learning\n\nMachine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance.\nML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. When applied to business problems, it is known under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods."]], "0_corpus": "wikipedia", "1_conv_id": "c61618d0f41e4415af9f66d2d3c0165e", "1_model_name": "text-embedding-004", "1_prompt": "机器学习的主要算法有哪些?", "1_output": [["机器学习的主要算法有哪些?", "Interrogative word\n\nA particular type of interrogative word is the interrogative particle, which serves to convert a statement into a yes–no question, without having any other meaning. Examples include est-ce que in French, ли li in Russian, czy in Polish, чи chy in Ukrainian, ĉu in Esperanto, āyā آیا in Persian, কি ki in Bengali, / ma in Mandarin Chinese, ///Finnish and Turkish have vowel harmony, see more here in Turkish, pa in Ladin, ka in Japanese, kka in Korean, ko/kö in Finnish, (да) ли (da) li in Serbo-Croatian and and in Basque. \"Is it true that...\" and \"... right?\" would be a similar construct in English. Such particles contrast with other interrogative words, which form what are called wh-questions rather than yes–no questions.\nFor more information about the grammatical rules for using formed questions in various languages, see Interrogative.\nIn English\nInterrogative words in English can serve as interrogative determiners, interrogative pronouns, or interrogative adverbs. Certain pronominal adverbs may also be used as interrogative words, such as whereby or wherefore.\nInterrogative determiner\nThe interrogative words which, what, and whose are interrogative determiners when used to prompt the specification of a presented noun or noun phrase such as in the question Which farm is the largest? where the interrogative determiner which prompts specification of the noun farm. In the question Whose gorgeous, pink painting is that?, whose is the interrogative, personal, possessive determiner prompting a specification for the possessor of the noun phrase gorgeous pink painting."]], "1_corpus": "wikipedia"}
  {"tstamp": 1723366146.8638, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "44af829c4c324ab8b7f6e85f17c8c65e", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "机器学习的主要算法有哪些?", "0_output": [["机器学习的主要算法有哪些?", "Machine learning\n\nMachine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance.\nML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. When applied to business problems, it is known under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods."]], "0_corpus": "wikipedia", "1_conv_id": "6109dc31bdbb4daea28cbf565be982bb", "1_model_name": "text-embedding-3-large", "1_prompt": "机器学习的主要算法有哪些?", "1_output": [["机器学习的主要算法有哪些?", "Outline of machine learning\n\nALOPEX Backpropagation\nBootstrap aggregating\nCN2 algorithm\nConstructing skill trees\nDehaene–Changeux model\nDiffusion map\nDominance-based rough set approach\nDynamic time warping\nError-driven learning\nEvolutionary multimodal optimization\nExpectation–maximization algorithm\nFastICA\nForward–backward algorithm\nGeneRec\nGenetic Algorithm for Rule Set Production\nGrowing self-organizing map\nHyper basis function network\nIDistance\nk-nearest neighbors algorithm\nKernel methods for vector output\nKernel principal component analysis\nLeabra\nLinde–Buzo–Gray algorithm\nLocal outlier factor\nLogic learning machine\nLogitBoost\nManifold alignment\nMarkov chain Monte Carlo (MCMC)\nMinimum redundancy feature selection\nMixture of experts\nMultiple kernel learning\nNon-negative matrix factorization\nOnline machine learning\nOut-of-bag error\nPrefrontal cortex basal ganglia working memory\nPVLV\nQ-learning\nQuadratic unconstrained binary optimization\nQuery-level feature\nQuickprop\nRadial basis function network\nRandomized weighted majority algorithm\nReinforcement learning\nRepeated incremental pruning to produce error reduction (RIPPER)\nRprop\nRule-based machine learning\nSkill chaining\nSparse PCA\nState–action–reward–state–action\nStochastic gradient descent\nStructured kNN\nT-distributed stochastic neighbor embedding\nTemporal difference learning\nWake-sleep algorithm\nWeighted majority algorithm (machine learning)\nMachine learning methods\nInstance-based algorithm\nK-nearest neighbors algorithm (KNN)\nLearning vector quantization (LVQ)\nSelf-organizing map (SOM)\nRegression analysis\nLogistic regression\nOrdinary least squares regression (OLSR)\nLinear regression\nStepwise regression\nMultivariate adaptive regression splines (MARS)\nRegularization algorithm\nRidge regression\nLeast Absolute Shrinkage and Selection Operator (LASSO)\nElastic net\nLeast-angle regression (LARS)\nClassifiers\nProbabilistic classifier\nNaive Bayes classifier\nBinary classifier"]], "1_corpus": "wikipedia"}
  {"tstamp": 1723409126.2508, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "76f6965b78ea4ab295ced0ec506be1da", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "0_corpus": "wikipedia", "1_conv_id": "dbdb49caa4964061884673a7efac3eb9", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
+ {"tstamp": 1723409887.6995, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "a9e47624cc0440a5a49967dd35426979", "0_model_name": "text-embedding-3-large", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "48be1d13d2d94023927786ff4a3bd9fd", "1_model_name": "BM25", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Michael E. Fisher - teacher, mentor, colleague and friend: a (very) personal account\n\nAbstract: The only rational way of educating is to be an example. If one cant help it, a warning example. Albert Einstein. I had the good fortune and privilege of having Michael Fisher as my teacher, supervisor, mentor and friend. During my years as a scientist, teacher and supervisor of about one hundred students and post docs I found myself innumerable times realizing that I am following or at least trying to follow Michaels example. These pages are my attempt to convey recollections of my association with Michael, focusing on how he served as an example for me."]], "1_corpus": "arxiv"}
+ {"tstamp": 1723409922.5619, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "3b998157a560496cabc05570a1710f87", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "Who wrote about a guide more popular than the Encyclopedia Galactica?", "0_output": [["Who wrote about a guide more popular than the Encyclopedia Galactica?", "Encyclopedia Galactica\n\nLater instances in fiction Various authors have invoked the Encyclopedia Galactica in both science and science fiction. The first may have been author Frank Holby's short story \"The Strange Case of the Missing Hero\" in the July 1942 issue of Astounding Science Fiction which featured Sebastian Lelong, editor of the Encyclopedia. It was also a common fixture in previous incarnations of the Legion of Super-Heroes comic books, and has appeared in the Star Wars expanded universe and Superman comics set in the future. The \"Encyclopedia Galactica\" was also mentioned as being a collection of all the knowledge of a galactic Empire in the 1989 science fiction short story \"The Originist\" by American novelist Orson Scott Card, and took place in Isaac Asimov's fictional \"Foundation\" Universe.\nIn the comic science fiction series by Douglas Adams, the Galactica is frequently contrasted with the apparently more popular Hitchhiker's Guide to the Galaxy:\nRobert A. Heinlein mentioned the Encyclopedia in chapter three of To Sail Beyond the Sunset (1987): \"... the computer that led the Lunar Revolution on time line three, code 'Neil Armstrong.' Let's skip the details; it's all in Encyclopedia Galacta (sic) and other books.\"\nIn Arthur C. Clarke's and Gentry Lee's novel Rama II (1989), Nicole des Jardins says to Richard Wakefield, \"Just think, the sum of everything all human beings know or have ever known might be nothing more than an infinitesimal fraction of the Encyclopedia Galactica.\"\n\"Encyclopedia Galactica\" is mentioned by Charlie Sheen's character in The Arrival (1996), and by Jodie Foster's character in Contact (1997)."]], "0_corpus": "wikipedia", "1_conv_id": "f719995207d3400c808616df9d5d23cf", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Who wrote about a guide more popular than the Encyclopedia Galactica?", "1_output": [["Who wrote about a guide more popular than the Encyclopedia Galactica?", "Guide\n\nA guide is a person who leads travelers, sportspeople, or tourists through unknown or unfamiliar locations. The term can also be applied to a person who leads others to more abstract goals such as knowledge or wisdom."]], "1_corpus": "wikipedia"}
data/retrieval_individual-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl CHANGED
@@ -187,3 +187,5 @@
  {"tstamp": 1723409117.7031, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1723409115.362, "finish": 1723409117.7031, "ip": "", "conv_id": "dbdb49caa4964061884673a7efac3eb9", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
  {"tstamp": 1723409595.6599, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1723409594.7419, "finish": 1723409595.6599, "ip": "", "conv_id": "a9e47624cc0440a5a49967dd35426979", "model_name": "text-embedding-3-large", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
  {"tstamp": 1723409595.6599, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1723409594.7419, "finish": 1723409595.6599, "ip": "", "conv_id": "48be1d13d2d94023927786ff4a3bd9fd", "model_name": "BM25", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Michael E. Fisher - teacher, mentor, colleague and friend: a (very) personal account\n\nAbstract: The only rational way of educating is to be an example. If one cant help it, a warning example. Albert Einstein. I had the good fortune and privilege of having Michael Fisher as my teacher, supervisor, mentor and friend. During my years as a scientist, teacher and supervisor of about one hundred students and post docs I found myself innumerable times realizing that I am following or at least trying to follow Michaels example. These pages are my attempt to convey recollections of my association with Michael, focusing on how he served as an example for me."]], "corpus": "arxiv"}
+ {"tstamp": 1723409913.0612, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723409912.8428, "finish": 1723409913.0612, "ip": "", "conv_id": "3b998157a560496cabc05570a1710f87", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Who wrote about a guide more popular than the Encyclopedia Galactica?", "output": [["Who wrote about a guide more popular than the Encyclopedia Galactica?", "Encyclopedia Galactica\n\nLater instances in fiction Various authors have invoked the Encyclopedia Galactica in both science and science fiction. The first may have been author Frank Holby's short story \"The Strange Case of the Missing Hero\" in the July 1942 issue of Astounding Science Fiction which featured Sebastian Lelong, editor of the Encyclopedia. It was also a common fixture in previous incarnations of the Legion of Super-Heroes comic books, and has appeared in the Star Wars expanded universe and Superman comics set in the future. The \"Encyclopedia Galactica\" was also mentioned as being a collection of all the knowledge of a galactic Empire in the 1989 science fiction short story \"The Originist\" by American novelist Orson Scott Card, and took place in Isaac Asimov's fictional \"Foundation\" Universe.\nIn the comic science fiction series by Douglas Adams, the Galactica is frequently contrasted with the apparently more popular Hitchhiker's Guide to the Galaxy:\nRobert A. Heinlein mentioned the Encyclopedia in chapter three of To Sail Beyond the Sunset (1987): \"... the computer that led the Lunar Revolution on time line three, code 'Neil Armstrong.' Let's skip the details; it's all in Encyclopedia Galacta (sic) and other books.\"\nIn Arthur C. Clarke's and Gentry Lee's novel Rama II (1989), Nicole des Jardins says to Richard Wakefield, \"Just think, the sum of everything all human beings know or have ever known might be nothing more than an infinitesimal fraction of the Encyclopedia Galactica.\"\n\"Encyclopedia Galactica\" is mentioned by Charlie Sheen's character in The Arrival (1996), and by Jodie Foster's character in Contact (1997)."]], "corpus": "wikipedia"}
+ {"tstamp": 1723409913.0612, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1723409912.8428, "finish": 1723409913.0612, "ip": "", "conv_id": "f719995207d3400c808616df9d5d23cf", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Who wrote about a guide more popular than the Encyclopedia Galactica?", "output": [["Who wrote about a guide more popular than the Encyclopedia Galactica?", "Guide\n\nA guide is a person who leads travelers, sportspeople, or tourists through unknown or unfamiliar locations. The term can also be applied to a person who leads others to more abstract goals such as knowledge or wisdom."]], "corpus": "wikipedia"}