Muennighoff committed
Commit 89a02bb · verified · 1 Parent(s): 91e8377

Scheduled Commit

data/retrieval_battle-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl CHANGED
@@ -17,3 +17,4 @@
  {"tstamp": 1722445718.3648, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "55af47c6ca9c4ac7a77ee1e484d2f648", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "0_output": [["Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "Title: Is Nothing Sacred? Vacuum Energy, Supersymmetry and Lorentz Breaking from Recoiling D branes\n\nAbstract: Classical superstring vacua have zero vacuum energy and are supersymmetric and Lorentz-invariant. We argue that all these properties may be destroyed when quantum aspects of the interactions between particles and non-perturbative vacuum fluctuations are considered. A toy calculation of string/D-brane interactions using a world-sheet approach indicates that quantum recoil effects - reflecting the gravitational back-reaction on space-time foam due to the propagation of energetic particles - induce non-zero vacuum energy that is linked to supersymmetry breaking and breaks Lorentz invariance. This model of space-time foam also suggests the appearance of microscopic event horizons."]], "0_corpus": "arxiv", "1_conv_id": "8b2315872e5949a7a35825b601d43977", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "1_output": [["Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "Title: Character-level Chinese-English Translation through ASCII Encoding\n\nAbstract: Character-level Neural Machine Translation (NMT) models have recently achieved impressive results on many language pairs. They mainly do well for Indo-European language pairs, where the languages share the same writing system. However, for translating between Chinese and English, the gap between the two different writing systems poses a major challenge because of a lack of systematic correspondence between the individual linguistic units. In this paper, we enable character-level NMT for Chinese, by breaking down Chinese characters into linguistic units similar to that of Indo-European languages. We use the Wubi encoding scheme, which preserves the original shape and semantic information of the characters, while also being reversible. We show promising results from training Wubi-based models on the character- and subword-level with recurrent as well as convolutional models."]], "1_corpus": "arxiv"}
  {"tstamp": 1722445796.0975, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "483dafda9ed54c9c80bc45216ad796db", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "what type of data is provided in a knowledge base", "0_output": [["what type of data is provided in a knowledge base", "W. Edwards Deming\n\nDeming continued to advise businesses large and small. From 1985 through 1989, Deming served as a consultant to Vernay Laboratories, a rubber manufacturing firm in Yellow Springs, Ohio, with fewer than 1,000 employees. He held several week-long seminars for employees and suppliers of the small company where his famous example \"Workers on the Red Beads\" spurred several major changes in Vernay's manufacturing processes.\nDeming joined the Graduate School of Business at Columbia University in 1988. In 1990, during his last year, he founded the W. Edwards Deming Center for Quality, Productivity, and Competitiveness at Columbia Business School to promote operational excellence in business through the development of research, best practices and strategic planning.\nIn 1990, Marshall Industries (NYSE:MI, 1984–1999) CEO Robert Rodin trained with the then 90-year-old Deming and his colleague Nida Backaitis. Marshall Industries' dramatic transformation and growth from $400 million to $1.8 billion in sales was chronicled in Deming's last book The New Economics, a Harvard Case Study, and Rodin's book, Free, Perfect and Now.\nIn 1993, Deming published his final book, The New Economics for Industry, Government, Education, which included the System of Profound Knowledge and the 14 Points for Management. It also contained educational concepts involving group-based teaching without grades, as well as management without individual merit or performance reviews.\nAcademic contributions\nThe philosophy of W. Edwards Deming has been summarized as follows:\nThe Deming System of Profound Knowledge"]], "0_corpus": "wikipedia", "1_conv_id": "3139be50f5064e0894343f6a22890c54", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "what type of data is provided in a knowledge base", "1_output": [["what type of data is provided in a knowledge base", "Knowledge base\n\nIn computer science, a knowledge base (KB) is a set of sentences, each sentence given in a knowledge representation language, with interfaces to tell new sentences and to ask questions about what is known, where either of these interfaces might use inference. It is a technology used to store complex structured data used by a computer system. The initial use of the term was in connection with expert systems, which were the first knowledge-based systems.\nOriginal usage of the term"]], "1_corpus": "wikipedia"}
  {"tstamp": 1722445834.473, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "ed8435cf63824a24a3b2372e6608f29d", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "how to change share name in windows server 2008?", "0_output": [["how to change share name in windows server 2008?", "Superuser Stackexchange\n\nQ: How to change the administrator account name from command prompt in Windows Server 2008 Server Core? I'm trying to change the administrator account name on my virtual machine for lab work purposes. I'm running Windows Server 2008 Server Core, therefore I'm only using the standard command prompts.\nHow can I do this?\n\nA: If it domain user install Active Directory Administration module for PowerShell. \ncommand line:\npowershell Import-module ActiveDirectory\n\nrename domain user -\nPowershell:\nGet-ADUser -Identity 'Administrator' | Rename-ADObject -NewName 'Partisan'\n\ncommand line:\npowershell Get-ADUser -Identity 'Administrator' ^| Rename-ADObject -NewName 'Partisan'\n\nvar 2:\ndsquery user -name Administrator | dsmove -newname \"Partisan\"\n\nlocal administrator -\nPowershell:\nRename-LocalUser -UserName 'Administrator' -NewUserName 'Partisan'\n\ncommand line:\npowershell Rename-LocalUser -UserName 'Administrator' -NewUserName 'Partisan'\n\nvar2:\nwmic UserAccount where Name=\"Administrator\" call Rename Name=\"Partisan\"\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "939f7aa837ce48d9b9700ee6346622d4", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "how to change share name in windows server 2008?", "1_output": [["how to change share name in windows server 2008?", "Serverfault Stackexchange\n\nQ: How do you configure a \\\\domain\\sharedFolder in windows 2008? Background: \n\n\n*\n\n*I've got several Windows 2008 servers setup by someone else.\n\n*They are all on one domain, lets call this mydomain\n\n*Several boxes are accessing a shared folder by going to \\\\mydomain\\share1\n\n*I found the actual files backing the share on box1\n\n*I need to move the location of this share to a larger drive for space reasons\n\n\nI'm familiar with setting up and managing shares like \\\\box1\\share1, but where do you manage/configure shares that are domain-wide; accessed like \\\\mydomain\\share1 ?\n\nA: It sounds like you have a domain based DFS configured, which is managed by the DFS Management utility.\n"]], "1_corpus": "stackexchange"}
+ {"tstamp": 1722446198.8045, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "9e2f3e5d4eb04b5db030979a91bfe01e", "0_model_name": "text-embedding-3-large", "0_prompt": "How can a small visual-linguistic model be effectively trained using knowledge distillation?", "0_output": [["How can a small visual-linguistic model be effectively trained using knowledge distillation?", "Title: Compressing Visual-linguistic Model via Knowledge Distillation\n\nAbstract: Despite exciting progress in pre-training for visual-linguistic (VL) representations, very few aspire to a small VL model. In this paper, we study knowledge distillation (KD) to effectively compress a transformer-based large VL model into a small VL model. The major challenge arises from the inconsistent regional visual tokens extracted from different detectors of Teacher and Student, resulting in the misalignment of hidden representations and attention distributions. To address the problem, we retrain and adapt the Teacher by using the same region proposals from Student's detector while the features are from Teacher's own object detector. With aligned network inputs, the adapted Teacher is capable of transferring the knowledge through the intermediate representations. Specifically, we use the mean square error loss to mimic the attention distribution inside the transformer block and present a token-wise noise contrastive loss to align the hidden state by contrasting with negative representations stored in a sample queue. To this end, we show that our proposed distillation significantly improves the performance of small VL models on image captioning and visual question answering tasks. It reaches 120.8 in CIDEr score on COCO captioning, an improvement of 5.1 over its non-distilled counterpart; and an accuracy of 69.8 on VQA 2.0, a 0.8 gain from the baseline. Our extensive experiments and ablations confirm the effectiveness of VL distillation in both pre-training and fine-tuning stages."]], "0_corpus": "arxiv", "1_conv_id": "99d70563aab749489c924deb1dc45544", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "How can a small visual-linguistic model be effectively trained using knowledge distillation?", "1_output": [["How can a small visual-linguistic model be effectively trained using knowledge distillation?", "Title: Compressing Visual-linguistic Model via Knowledge Distillation\n\nAbstract: Despite exciting progress in pre-training for visual-linguistic (VL) representations, very few aspire to a small VL model. In this paper, we study knowledge distillation (KD) to effectively compress a transformer-based large VL model into a small VL model. The major challenge arises from the inconsistent regional visual tokens extracted from different detectors of Teacher and Student, resulting in the misalignment of hidden representations and attention distributions. To address the problem, we retrain and adapt the Teacher by using the same region proposals from Student's detector while the features are from Teacher's own object detector. With aligned network inputs, the adapted Teacher is capable of transferring the knowledge through the intermediate representations. Specifically, we use the mean square error loss to mimic the attention distribution inside the transformer block and present a token-wise noise contrastive loss to align the hidden state by contrasting with negative representations stored in a sample queue. To this end, we show that our proposed distillation significantly improves the performance of small VL models on image captioning and visual question answering tasks. It reaches 120.8 in CIDEr score on COCO captioning, an improvement of 5.1 over its non-distilled counterpart; and an accuracy of 69.8 on VQA 2.0, a 0.8 gain from the baseline. Our extensive experiments and ablations confirm the effectiveness of VL distillation in both pre-training and fine-tuning stages."]], "1_corpus": "arxiv"}
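Each line added to retrieval_battle-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl is one side-by-side retrieval battle: a timestamp (tstamp), the task and vote type (rightvote or tievote in the records above), and, for each of the two sides under the 0_* and 1_* keys, the conversation id, model name, prompt, retrieved passage, and corpus. A minimal sketch of tallying vote outcomes from a local copy of the file with plain Python (the local path assumes a checkout of this dataset repository):

import json
from collections import Counter

# Assumed local path to the battle log changed in this commit.
path = "data/retrieval_battle-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl"

votes = Counter()
with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # one battle per line
        # Count vote outcomes per model pairing, e.g.
        # ("tievote", "text-embedding-3-large", "Salesforce/SFR-Embedding-2_R").
        votes[(record["type"], record["0_model_name"], record["1_model_name"])] += 1

for (vote, model_a, model_b), count in votes.most_common():
    print(f"{vote}: {model_a} vs {model_b} -> {count}")
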
data/retrieval_individual-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl CHANGED
@@ -48,3 +48,9 @@
  {"tstamp": 1722445941.847, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722445941.6259, "finish": 1722445941.847, "ip": "", "conv_id": "f3b21c3ce29d447f9701d499a4a05d8f", "model_name": "GritLM/GritLM-7B", "prompt": "who was the original host of what's my line", "output": [["who was the original host of what's my line", "What's My Line?\n\nMost notably, the mystery guests ran the gamut of show business' most legendary icons, including Jack Benny, John Wayne, Gary Cooper, Elizabeth Taylor, Ella Fitzgerald, Ava Gardner, Joan Crawford, James Cagney, Julie Andrews, Jack Lemmon, Jackie Gleason, Steve McQueen, Alfred Hitchcock, Doris Day, James Garner, Ronald Reagan, Jane Russell, Rosalind Russell, Fred Astaire, George Raft, Edward G. Robinson, Gene Autry, Lena Horne, Paul Newman and Joanne Woodward, Andy Griffith, Harry Belafonte, Ginger Rogers, Roy Rogers, Lucille Ball, Desi Arnaz, Bob Hope, Frank Sinatra, Dean Martin and Jerry Lewis, Sammy Davis Jr., Peter Lawford, Sean Connery, James Stewart, Henry Fonda, Salvador Dalí, Aretha Franklin, and many more.\nStyle\nWhat's My Line? is known for its attention to manners and class. In its early years, business suits and street dresses were worn by the host and panelists, but by 1953, the men normally wore black suits with bow tie (a few guests in fact wore black tie) while female panelists donned formal gown and often gloves. Exceptions to this dress code were on the broadcasts immediately following the deaths of Fred Allen and Dorothy Kilgallen, in which the male cast members wore straight neckties and the women wore simpler dresses.\nThe game followed a line of formality and adherence to rules. Although using first names at other points, Daly usually addressed using surnames when passing the questioning to a particular panelist. He would also amiably chide the panel if they began a conference without first asking him."]], "corpus": "wikipedia"}
  {"tstamp": 1722446192.9653, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722446192.4398, "finish": 1722446192.9653, "ip": "", "conv_id": "9e2f3e5d4eb04b5db030979a91bfe01e", "model_name": "text-embedding-3-large", "prompt": "How can a small visual-linguistic model be effectively trained using knowledge distillation?", "output": [["How can a small visual-linguistic model be effectively trained using knowledge distillation?", "Title: Compressing Visual-linguistic Model via Knowledge Distillation\n\nAbstract: Despite exciting progress in pre-training for visual-linguistic (VL) representations, very few aspire to a small VL model. In this paper, we study knowledge distillation (KD) to effectively compress a transformer-based large VL model into a small VL model. The major challenge arises from the inconsistent regional visual tokens extracted from different detectors of Teacher and Student, resulting in the misalignment of hidden representations and attention distributions. To address the problem, we retrain and adapt the Teacher by using the same region proposals from Student's detector while the features are from Teacher's own object detector. With aligned network inputs, the adapted Teacher is capable of transferring the knowledge through the intermediate representations. Specifically, we use the mean square error loss to mimic the attention distribution inside the transformer block and present a token-wise noise contrastive loss to align the hidden state by contrasting with negative representations stored in a sample queue. To this end, we show that our proposed distillation significantly improves the performance of small VL models on image captioning and visual question answering tasks. It reaches 120.8 in CIDEr score on COCO captioning, an improvement of 5.1 over its non-distilled counterpart; and an accuracy of 69.8 on VQA 2.0, a 0.8 gain from the baseline. Our extensive experiments and ablations confirm the effectiveness of VL distillation in both pre-training and fine-tuning stages."]], "corpus": "arxiv"}
  {"tstamp": 1722446192.9653, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722446192.4398, "finish": 1722446192.9653, "ip": "", "conv_id": "99d70563aab749489c924deb1dc45544", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "How can a small visual-linguistic model be effectively trained using knowledge distillation?", "output": [["How can a small visual-linguistic model be effectively trained using knowledge distillation?", "Title: Compressing Visual-linguistic Model via Knowledge Distillation\n\nAbstract: Despite exciting progress in pre-training for visual-linguistic (VL) representations, very few aspire to a small VL model. In this paper, we study knowledge distillation (KD) to effectively compress a transformer-based large VL model into a small VL model. The major challenge arises from the inconsistent regional visual tokens extracted from different detectors of Teacher and Student, resulting in the misalignment of hidden representations and attention distributions. To address the problem, we retrain and adapt the Teacher by using the same region proposals from Student's detector while the features are from Teacher's own object detector. With aligned network inputs, the adapted Teacher is capable of transferring the knowledge through the intermediate representations. Specifically, we use the mean square error loss to mimic the attention distribution inside the transformer block and present a token-wise noise contrastive loss to align the hidden state by contrasting with negative representations stored in a sample queue. To this end, we show that our proposed distillation significantly improves the performance of small VL models on image captioning and visual question answering tasks. It reaches 120.8 in CIDEr score on COCO captioning, an improvement of 5.1 over its non-distilled counterpart; and an accuracy of 69.8 on VQA 2.0, a 0.8 gain from the baseline. Our extensive experiments and ablations confirm the effectiveness of VL distillation in both pre-training and fine-tuning stages."]], "corpus": "arxiv"}
+ {"tstamp": 1722446212.8405, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722446212.6228, "finish": 1722446212.8405, "ip": "", "conv_id": "1162357ab68643c9b54af421206dd72f", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "do dothraki scenes have subtitles?", "output": [["do dothraki scenes have subtitles?", "Gaming Stackexchange\n\nQ: Are subtitles available in Dragon Age Origins? While I possess a wonderful pair of speakers, I find that with their period accent, some of the NPCs in Dragon Age are woefully misunderstood (by me). I know that there are some captions in this game already, as the last line of dialog will appear at the top of the screen if you wait long enough when presented with dialog options, but I'm wondering if there is a setting somewhere that subtitles all spoken dialog.\nI've looked under the in-game menu, and can't seem to find an option for it. Am I looking in the wrong place, or simply looking fruitlessly?\n\nA: It's directly on the game tab of the options dialog, for some ineffable reason in the Feedback area.\n\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1722446212.8405, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722446212.6228, "finish": 1722446212.8405, "ip": "", "conv_id": "4308e1ea2fc044a7ae3dc67fe5770f6e", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "do dothraki scenes have subtitles?", "output": [["do dothraki scenes have subtitles?", "Gaming Stackexchange\n\nQ: Are subtitles available in Dragon Age Origins? While I possess a wonderful pair of speakers, I find that with their period accent, some of the NPCs in Dragon Age are woefully misunderstood (by me). I know that there are some captions in this game already, as the last line of dialog will appear at the top of the screen if you wait long enough when presented with dialog options, but I'm wondering if there is a setting somewhere that subtitles all spoken dialog.\nI've looked under the in-game menu, and can't seem to find an option for it. Am I looking in the wrong place, or simply looking fruitlessly?\n\nA: It's directly on the game tab of the options dialog, for some ineffable reason in the Feedback area.\n\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1722446232.0034, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722446231.7002, "finish": 1722446232.0034, "ip": "", "conv_id": "a83e9f75619944e19b431551f3b8d60c", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Hvad er en normtid", "output": [["Hvad er en normtid", "Pehr Evind Svinhufvud\n\nPehr Evind Svinhufvud af Qvalstad (; ; 15 December 1861 – 29 February 1944) was the third president of Finland from 1931 to 1937. Serving as a lawyer, judge, and politician in the Grand Duchy of Finland, which was at that time an autonomous state under the Russian Empire’s rule, Svinhufvud played a major role in the movement for Finnish independence. He was the one who presented the Declaration of Independence to the Parliament."]], "corpus": "wikipedia"}
+ {"tstamp": 1722446232.0034, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722446231.7002, "finish": 1722446232.0034, "ip": "", "conv_id": "43255f0f01d84ab2ad728e93ed4f8906", "model_name": "embed-english-v3.0", "prompt": "Hvad er en normtid", "output": [["Hvad er en normtid", "Normativity\n\nNormative generally means relating to an evaluative standard. Normativity is the phenomenon in human societies of designating some actions or outcomes as good, desirable, or permissible, and others as bad, undesirable, or impermissible. A norm in this sense means a standard for evaluating or making judgments about behavior or outcomes. \"Normative\" is sometimes also used, somewhat confusingly, to mean relating to a descriptive standard: doing what is normally done or what most others are expected to do in practice. In this sense a norm is not evaluative, a basis for judging behavior or outcomes; it is simply a fact or observation about behavior or outcomes, without judgment. Many researchers in science, law, and philosophy try to restrict the use of the term \"normative\" to the evaluative sense and refer to the description of behavior and outcomes as positive, descriptive, predictive, or empirical.\nNormative has specialised meanings in different academic disciplines such as philosophy, social sciences, and law. In most contexts, normative means 'relating to an evaluation or value judgment.' Normative propositions tend to evaluate some object or some course of action. Normative content differs from descriptive content.\nThough philosophers disagree about how normativity should be understood; it has become increasingly common to understand normative claims as claims about reasons. As Derek Parfit explains:"]], "corpus": "wikipedia"}
+ {"tstamp": 1722446289.5149, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722446289.3077, "finish": 1722446289.5149, "ip": "", "conv_id": "a60921de87f34a499dac2cf89e0be350", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "What's the purpose of `if __name__ == '__main__'`?", "output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What's the point of a main function and/or __name__ == \"__main__\" check in Python? I occasionally notice something like the following in Python scripts:\nif __name__ == \"__main__\":\n # do stuff like call main()\n\nWhat's the point of this?\n\nA: This allows a python script to be imported or run standalone is a sane way.\nIf you run a python file directly, the __name__ variable will contain __main__. If you import the script that will not be the case. Normally, if you import the script you want to call functions or reference classes from the file.\nIf you did not have this check, any code that was not in a class or function would run when you import.\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1722446289.5149, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722446289.3077, "finish": 1722446289.5149, "ip": "", "conv_id": "0923dcd96155417db71132f430c88c5b", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "What's the purpose of `if __name__ == '__main__'`?", "output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What does `if name == \"__main__\"` mean in Python? \nPossible Duplicate:\nWhat does <if name==“main”:> do? \n\nI have wrote scripts in Python for quite a while now and I study more of Python as I need it. When reading other people's code I meet if name == \"__main__\": construct quite often. \nWhat is it good for?\n\nA: Basically, \nThere's a distinction between the \"main\" script file and external files which were imported or referenced in another way. If the script is the \"main\" script then the special variable __name__ will equal \"__main__\".\nYou can use this to protect the \"execution\" code from the classes and variables the script has. This enables you to import a script and use classes and variables without actually running that script's main code if it has any.\nSee also: What does if name == “main”: do?\n"]], "corpus": "stackexchange"}
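The retrieval_individual-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl records log each model's run on its own: timing fields (start, finish), the model_name, the prompt, the retrieved passage under output, and the corpus, keyed by a conv_id that matches the 0_conv_id / 1_conv_id fields of the corresponding battle record. A rough sketch of joining the two files on those ids, under the same local-path assumption as above:

import json

battle_path = "data/retrieval_battle-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl"
individual_path = "data/retrieval_individual-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl"

def read_jsonl(path):
    # One JSON object per line.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

# Index individual retrieval runs by their conversation id.
runs = {r["conv_id"]: r for r in read_jsonl(individual_path)}

# Attach both sides' individual runs to each battle via 0_conv_id / 1_conv_id.
for battle in read_jsonl(battle_path):
    left = runs.get(battle["0_conv_id"])
    right = runs.get(battle["1_conv_id"])
    if left is not None and right is not None:
        print(battle["type"], "|", left["model_name"], "vs", right["model_name"], "|", battle["0_prompt"])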