Dataset: mteb
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Muennighoff committed on
Commit 6f5495d · verified · 1 Parent(s): 8bdf494

Scheduled Commit

data/clustering_individual-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl CHANGED
@@ -26,3 +26,11 @@
  {"tstamp": 1723366278.5651, "task_type": "clustering", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1723366278.5194, "finish": 1723366278.5651, "ip": "", "conv_id": "048c18c5f78944eaa5c9d01d76d57300", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": ["JT-QP-01 文件控制程序"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1723366291.6769, "task_type": "clustering", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1723366291.3896, "finish": 1723366291.6769, "ip": "", "conv_id": "0cfc2ce80b0e4c7db1b0ad5f10162fb0", "model_name": "embed-english-v3.0", "prompt": ["JT-QP-01 文件控制程序", "temperate", "tropical", "boreal", "physics", "mathematics", "history", "biology", "rake", "hoe", "watering can", "shovel", "trowel", "wheelbarrow", "cowboy hat", "beanie", "fantasy", "science fiction", "mystery"], "ncluster": 5, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1723366291.6769, "task_type": "clustering", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1723366291.3896, "finish": 1723366291.6769, "ip": "", "conv_id": "048c18c5f78944eaa5c9d01d76d57300", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": ["JT-QP-01 文件控制程序", "temperate", "tropical", "boreal", "physics", "mathematics", "history", "biology", "rake", "hoe", "watering can", "shovel", "trowel", "wheelbarrow", "cowboy hat", "beanie", "fantasy", "science fiction", "mystery"], "ncluster": 5, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723409272.2761, "task_type": "clustering", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723409272.2303, "finish": 1723409272.2761, "ip": "", "conv_id": "98569eae160342459406a000ac1341bd", "model_name": "GritLM/GritLM-7B", "prompt": ["shear wall design"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723409272.2761, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723409272.2303, "finish": 1723409272.2761, "ip": "", "conv_id": "ae75439bc445400696766ec73cdf8625", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["shear wall design"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723409284.9904, "task_type": "clustering", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723409284.9024, "finish": 1723409284.9904, "ip": "", "conv_id": "98569eae160342459406a000ac1341bd", "model_name": "GritLM/GritLM-7B", "prompt": ["shear wall design", "window header"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723409284.9904, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723409284.9024, "finish": 1723409284.9904, "ip": "", "conv_id": "ae75439bc445400696766ec73cdf8625", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["shear wall design", "window header"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723409292.0951, "task_type": "clustering", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723409292.0161, "finish": 1723409292.0951, "ip": "", "conv_id": "98569eae160342459406a000ac1341bd", "model_name": "GritLM/GritLM-7B", "prompt": ["shear wall design", "window header", "subfloor spacinng"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723409292.0951, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723409292.0161, "finish": 1723409292.0951, "ip": "", "conv_id": "ae75439bc445400696766ec73cdf8625", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["shear wall design", "window header", "subfloor spacinng"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723409310.5352, "task_type": "clustering", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723409310.4564, "finish": 1723409310.5352, "ip": "", "conv_id": "98569eae160342459406a000ac1341bd", "model_name": "GritLM/GritLM-7B", "prompt": ["shear wall design", "window header", "subfloor spacinng", "joist design"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723409310.5352, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723409310.4564, "finish": 1723409310.5352, "ip": "", "conv_id": "ae75439bc445400696766ec73cdf8625", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["shear wall design", "window header", "subfloor spacinng", "joist design"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
data/retrieval_battle-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl CHANGED
@@ -58,3 +58,4 @@
  {"tstamp": 1723365905.027, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "0941b7c830b8467a998a3502f577a45b", "0_model_name": "text-embedding-004", "0_prompt": "小明吃了小月的面包后把面包包装还给了小明,请问面包包装现在在谁手上?", "0_output": [["小明吃了小月的面包后把面包包装还给了小明,请问面包包装现在在谁手上?", "Chinese pronouns\n\nOther, rarer new written pronouns in the second person are nǐ ( \"you, a deity\"), nǐ ( \"you, a male\"), and nǐ ( \"you, a female\"). In the third person, they are tā ( \"it, an animal\"), tā ( \"it, a deity\"), and tā ( \"it, an inanimate object\"). Among users of traditional Chinese characters, these distinctions are only made in Taiwanese Mandarin; in simplified Chinese, tā () is the only third-person non-human form and nǐ () is the only second person form. The third person distinction between \"he\" () and \"she\" () remain in use in all forms of written standard Mandarin.\nIn the early 21st century, some members of genderfluid and queer Chinese online communities started using X也 and TA to refer to a generic, anonymous, or non-binary third person. As of June 2022, neither have been encoded as a single code point in Unicode, and neither are considered standard usage. Since at least 2014, Bilibili has used TA in its user pages.\nAdditional notes\nThe first-person pronouns ǎn and ǒu \"I\" are infrequently used in Mandarin conversation. They are of dialectal origin. However, their usage is gaining popularity among the young, most notably in online communications.\nAccording to Wang Li, the second person formal pronoun nín ( \"you, formal; polite\") is derived from the fusion of the second person plural nǐmen ( \"you, formal; polite\"), making it somewhat analogous to the T-V distinction in Romance languages. Consistent with this hypothesized origin, *nínmen is traditionally considered to be a grammatically incorrect expression for the formal second person plural. Instead, the alternative phrases dàjiā (大家, \"you, formal plural\") and gèwèi (各位, \"you, formal plural\") are used, with the latter being somewhat more formal than the former. In addition, some dialects use an analogous formal third person pronoun tān (怹, \"he/she, formal; polite\")."]], "0_corpus": "wikipedia", "1_conv_id": "8d239d2ff1114e7cb3a6260e4712393c", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "小明吃了小月的面包后把面包包装还给了小明,请问面包包装现在在谁手上?", "1_output": [["小明吃了小月的面包后把面包包装还给了小明,请问面包包装现在在谁手上?", "Butterbrot\n\nNormally, сэндвич Runglish-y word for sandwich is used in Russian for two slices of bread with some ingredients in between, and the very word \"sandwich\" implies \"flat X between two flat Y\" idea in Runglish (see \"сэндвич-панель\").\nHowever, open sandwich is a \"бутерброд\" for a Russian.\nUrban legends\nButterbrot is said to always fall to the floor (and especially on carpet) with the buttered side downwards; an example of Murphy's law. A common explanation is that the top side is usually heavier than the bottom side, particularly if the bread has additional toppings such as a spread. Another is tied to the common height of tables. The subject has been researched by various sources, including the German children's series Die Sendung mit der Maus, and the scientific German TV series Quarks & Co.\nIt is often joked about what would happen if Butterbrot is tied to the back of a cat, in the same manner that hypothetical buttered toast attached to the back of a cat is sometimes joked about, with it being debated whether the cat would still honour the popular axiom that a cat \"always lands on its feet\", or if the Butterbrot would be \"stronger\", making the cat fall on its back — alternatively, it is sometimes humorously suggested that the cat would simply levitate, as it would be unable to satisfy both criteria for landing."]], "1_corpus": "wikipedia"}
  {"tstamp": 1723366112.8085, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7d91350d144543f190c415f6df188816", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "机器学习的主要算法有哪些?", "0_output": [["机器学习的主要算法有哪些?", "Machine learning\n\nMachine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance.\nML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. When applied to business problems, it is known under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods."]], "0_corpus": "wikipedia", "1_conv_id": "c61618d0f41e4415af9f66d2d3c0165e", "1_model_name": "text-embedding-004", "1_prompt": "机器学习的主要算法有哪些?", "1_output": [["机器学习的主要算法有哪些?", "Interrogative word\n\nA particular type of interrogative word is the interrogative particle, which serves to convert a statement into a yes–no question, without having any other meaning. Examples include est-ce que in French, ли li in Russian, czy in Polish, чи chy in Ukrainian, ĉu in Esperanto, āyā آیا in Persian, কি ki in Bengali, / ma in Mandarin Chinese, ///Finnish and Turkish have vowel harmony, see more here in Turkish, pa in Ladin, ka in Japanese, kka in Korean, ko/kö in Finnish, (да) ли (da) li in Serbo-Croatian and and in Basque. \"Is it true that...\" and \"... right?\" would be a similar construct in English. Such particles contrast with other interrogative words, which form what are called wh-questions rather than yes–no questions.\nFor more information about the grammatical rules for using formed questions in various languages, see Interrogative.\nIn English\nInterrogative words in English can serve as interrogative determiners, interrogative pronouns, or interrogative adverbs. Certain pronominal adverbs may also be used as interrogative words, such as whereby or wherefore.\nInterrogative determiner\nThe interrogative words which, what, and whose are interrogative determiners when used to prompt the specification of a presented noun or noun phrase such as in the question Which farm is the largest? where the interrogative determiner which prompts specification of the noun farm. In the question Whose gorgeous, pink painting is that?, whose is the interrogative, personal, possessive determiner prompting a specification for the possessor of the noun phrase gorgeous pink painting."]], "1_corpus": "wikipedia"}
  {"tstamp": 1723366146.8638, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "44af829c4c324ab8b7f6e85f17c8c65e", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "机器学习的主要算法有哪些?", "0_output": [["机器学习的主要算法有哪些?", "Machine learning\n\nMachine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance.\nML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. When applied to business problems, it is known under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods."]], "0_corpus": "wikipedia", "1_conv_id": "6109dc31bdbb4daea28cbf565be982bb", "1_model_name": "text-embedding-3-large", "1_prompt": "机器学习的主要算法有哪些?", "1_output": [["机器学习的主要算法有哪些?", "Outline of machine learning\n\nALOPEX Backpropagation\nBootstrap aggregating\nCN2 algorithm\nConstructing skill trees\nDehaene–Changeux model\nDiffusion map\nDominance-based rough set approach\nDynamic time warping\nError-driven learning\nEvolutionary multimodal optimization\nExpectation–maximization algorithm\nFastICA\nForward–backward algorithm\nGeneRec\nGenetic Algorithm for Rule Set Production\nGrowing self-organizing map\nHyper basis function network\nIDistance\nk-nearest neighbors algorithm\nKernel methods for vector output\nKernel principal component analysis\nLeabra\nLinde–Buzo–Gray algorithm\nLocal outlier factor\nLogic learning machine\nLogitBoost\nManifold alignment\nMarkov chain Monte Carlo (MCMC)\nMinimum redundancy feature selection\nMixture of experts\nMultiple kernel learning\nNon-negative matrix factorization\nOnline machine learning\nOut-of-bag error\nPrefrontal cortex basal ganglia working memory\nPVLV\nQ-learning\nQuadratic unconstrained binary optimization\nQuery-level feature\nQuickprop\nRadial basis function network\nRandomized weighted majority algorithm\nReinforcement learning\nRepeated incremental pruning to produce error reduction (RIPPER)\nRprop\nRule-based machine learning\nSkill chaining\nSparse PCA\nState–action–reward–state–action\nStochastic gradient descent\nStructured kNN\nT-distributed stochastic neighbor embedding\nTemporal difference learning\nWake-sleep algorithm\nWeighted majority algorithm (machine learning)\nMachine learning methods\nInstance-based algorithm\nK-nearest neighbors algorithm (KNN)\nLearning vector quantization (LVQ)\nSelf-organizing map (SOM)\nRegression analysis\nLogistic regression\nOrdinary least squares regression (OLSR)\nLinear regression\nStepwise regression\nMultivariate adaptive regression splines (MARS)\nRegularization algorithm\nRidge regression\nLeast Absolute Shrinkage and Selection Operator (LASSO)\nElastic net\nLeast-angle regression (LARS)\nClassifiers\nProbabilistic classifier\nNaive Bayes classifier\nBinary classifier"]], "1_corpus": "wikipedia"}
+ {"tstamp": 1723409126.2508, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "76f6965b78ea4ab295ced0ec506be1da", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "0_corpus": "wikipedia", "1_conv_id": "dbdb49caa4964061884673a7efac3eb9", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
data/retrieval_individual-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl CHANGED
@@ -183,3 +183,5 @@
  {"tstamp": 1723393943.6531, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723393941.4366, "finish": 1723393943.6531, "ip": "", "conv_id": "c4d47e7117fb4d5187d54c5a9763e911", "model_name": "GritLM/GritLM-7B", "prompt": "hellow", "output": [["hellow", "Hello\n\nHello is a salutation or greeting in the English language. It is first attested in writing from 1826.\nEarly uses\nHello, with that spelling, was used in publications in the U.S. as early as the 18 October 1826 edition of the Norwich Courier of Norwich, Connecticut. Another early use was an 1833 American book called The Sketches and Eccentricities of Col. David Crockett, of West Tennessee, which was reprinted that same year in The London Literary Gazette. The word was extensively used in literature by the 1860s.\nEtymology\nAccording to the Oxford English Dictionary, hello is an alteration of hallo, hollo, which came from Old High German \"halâ, holâ, emphatic imperative of halôn, holôn to fetch, used especially in hailing a ferryman\". It also connects the development of hello to the influence of an earlier form, holla, whose origin is in the French holà (roughly, 'whoa there!', from French là 'there'). As in addition to hello, halloo, hallo, hollo, hullo and (rarely) hillo also exist as variants or related words, the word can be spelt using any of all five vowels.\nBill Bryson asserts in his book Mother Tongue that \"hello\" is a contraction of the Old English phrase hál béo þu (\"Hale be thou\", or \"whole be thou\", meaning a wish for good health; cf. \"goodbye\" which is a contraction of \"God be with ye\")."]], "corpus": "wikipedia"}
  {"tstamp": 1723400973.0637, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1723400970.4715, "finish": 1723400973.0637, "ip": "", "conv_id": "495f20f96bb0430b9bd0fce793fccf97", "model_name": "text-embedding-3-large", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "corpus": "wikipedia"}
  {"tstamp": 1723400973.0637, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1723400970.4715, "finish": 1723400973.0637, "ip": "", "conv_id": "e7c0e184b5b944d7b205617825ffb9c6", "model_name": "BM25", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Artificial intelligence\n\nIn the late teens and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2015, AlphaGo, developed by DeepMind, beat the world champion Go player. The program was taught only the rules of the game and developed strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. These programs, and others, inspired an aggressive AI boom, where large companies began investing billions in AI research. According to AI Impacts, about $50 billion annually was invested in \"AI\" around 2022 in the U.S. alone and about 20% of the new U.S. Computer Science PhD graduates have specialized in \"AI\".\nAbout 800,000 \"AI\"-related U.S. job openings existed in 2022.\nPhilosophy\nDefining artificial intelligence\nAlan Turing wrote in 1950 \"I propose to consider the question 'can machines think'?\" He advised changing the question from whether a machine \"thinks\", to \"whether or not it is possible for machinery to show intelligent behaviour\". He devised the Turing test, which measures the ability of a machine to simulate human conversation. Since we can only observe the behavior of the machine, it does not matter if it is \"actually\" thinking or literally has a \"mind\". Turing notes that we can not determine these things about other people but \"it is usual to have a polite convention that everyone thinks.\""]], "corpus": "wikipedia"}
+ {"tstamp": 1723409117.7031, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1723409115.362, "finish": 1723409117.7031, "ip": "", "conv_id": "76f6965b78ea4ab295ced0ec506be1da", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
+ {"tstamp": 1723409117.7031, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1723409115.362, "finish": 1723409117.7031, "ip": "", "conv_id": "dbdb49caa4964061884673a7efac3eb9", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}