Datasets: mteb /
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Muennighoff committed
Commit 353b394 · verified · 1 parent: 9b10b5c

Scheduled Commit
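The files touched by this commit are plain JSON Lines (hence the json format and the Datasets/Dask library tags above). As a minimal sketch, assuming the battle log listed below has been downloaded locally under the same relative path, it can be read with the Hugging Face datasets JSON builder:

from datasets import load_dataset

# Assumption: the JSONL file from this commit is available locally at this relative path.
battles = load_dataset(
    "json",
    data_files="data/retrieval_battle-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl",
    split="train",
)
print(battles.column_names)  # e.g. tstamp, task_type, type, models, 0_model_name, ...
print(battles[0]["type"], battles[0]["0_model_name"], "vs", battles[0]["1_model_name"])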
data/clustering_individual-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl ADDED
@@ -0,0 +1,4 @@
+ {"tstamp": 1723244247.2146, "task_type": "clustering", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1723244236.9074, "finish": 1723244247.2146, "ip": "", "conv_id": "af74077e052b4fbbaec6ae005009a3cb", "model_name": "embed-english-v3.0", "prompt": ["backpack", "flashlight", "water filter", "compass", "life", "disability", "auto", "health", "pet", "elm", "pine", "oak", "willow", "maple", "birch", "cedar"], "ncluster": 3, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723244247.2146, "task_type": "clustering", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1723244236.9074, "finish": 1723244247.2146, "ip": "", "conv_id": "7e80a1585a50422f9bc6d24c888fa820", "model_name": "text-embedding-004", "prompt": ["backpack", "flashlight", "water filter", "compass", "life", "disability", "auto", "health", "pet", "elm", "pine", "oak", "willow", "maple", "birch", "cedar"], "ncluster": 3, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723244274.7176, "task_type": "clustering", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1723244274.5747, "finish": 1723244274.7176, "ip": "", "conv_id": "8ee353339b874131883f4e758c4fb6c2", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": ["Apple", "Samsung", "Huawei", "LG", "OnePlus", "Xiaomi", "wisdom tooth", "molar", "incisor", "premolar", "canine", "Opera", "Safari", "Brave", "Edge", "Firefox", "Chrome", "Capricorn", "Leo", "Taurus", "Aries", "Scorpio", "Libra", "Cancer", "Gemini", "canoe", "motorboat", "yacht", "catamaran"], "ncluster": 5, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723244274.7176, "task_type": "clustering", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1723244274.5747, "finish": 1723244274.7176, "ip": "", "conv_id": "dca734cd08a946dc991a76cbd4459fd7", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": ["Apple", "Samsung", "Huawei", "LG", "OnePlus", "Xiaomi", "wisdom tooth", "molar", "incisor", "premolar", "canine", "Opera", "Safari", "Brave", "Edge", "Firefox", "Chrome", "Capricorn", "Leo", "Taurus", "Aries", "Scorpio", "Libra", "Cancer", "Gemini", "canoe", "motorboat", "yacht", "catamaran"], "ncluster": 5, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
data/retrieval_battle-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl CHANGED
@@ -30,3 +30,5 @@
  {"tstamp": 1723238412.8572, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "dcfbf1652d544cb1bcd44d58864ed379", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "What is functionalism?", "0_output": [["What is functionalism?", "Functionalism (philosophy of mind)\n\nIn the philosophy of mind, functionalism is the thesis that each and every mental state (for example, the state of having a belief, of having a desire, or of being in pain) is constituted solely by its functional role, which means its causal relation to other mental states, sensory inputs, and behavioral outputs. Functionalism developed largely as an alternative to the identity theory of mind and behaviorism.\nFunctionalism is a theoretical level between the physical implementation and behavioral output. Therefore, it is different from its predecessors of Cartesian dualism (advocating independent mental and physical substances) and Skinnerian behaviorism and physicalism (declaring only physical substances) because it is only concerned with the effective functions of the brain, through its organization or its \"software programs\"."]], "0_corpus": "wikipedia", "1_conv_id": "79dc226c4cf84bb08fefea4e74852796", "1_model_name": "text-embedding-3-large", "1_prompt": "What is functionalism?", "1_output": [["What is functionalism?", "Structural functionalism\n\nStructural functionalism, or simply functionalism, is \"a framework for building theory that sees society as a complex system whose parts work together to promote solidarity and stability\".\nThis approach looks at society through a macro-level orientation, which is a broad focus on the social structures that shape society as a whole, and believes that society has evolved like organisms. This approach looks at both social structure and social functions. Functionalism addresses society as a whole in terms of the function of its constituent elements; namely norms, customs, traditions, and institutions.\nA common analogy called the organic or biological analogy, popularized by Herbert Spencer, presents these parts of society as human body \"organs\" that work toward the proper functioning of the \"body\" as a whole. In the most basic terms, it simply emphasizes \"the effort to impute, as rigorously as possible, to each feature, custom, or practice, its effect on the functioning of a supposedly stable, cohesive system\". For Talcott Parsons, \"structural-functionalism\" came to describe a particular stage in the methodological development of social science, rather than a specific school of thought.\nTheory\nIn sociology, classical theories are defined by a tendency towards biological analogy and notions of social evolutionism:"]], "1_corpus": "wikipedia"}
  {"tstamp": 1723239806.3829, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "af65f0c8f418471790a1c43b9ad94ddd", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "شراء اونلاين", "0_output": [["شراء اونلاين", "Online shopping\n\nOnline shopping is a form of electronic commerce which allows consumers to directly buy goods or services from a seller over the Internet using a web browser or a mobile app. Consumers find a product of interest by visiting the website of the retailer directly or by searching among alternative vendors using a shopping search engine, which displays the same product's availability and pricing at different e-retailers. customers can shop online using a range of different computers and devices, including desktop computers, laptops, tablet computers and smartphones."]], "0_corpus": "wikipedia", "1_conv_id": "786c447dd7b243be974adea03af0bce5", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "شراء اونلاين", "1_output": [["شراء اونلاين", "Islamic feminism\n\nMusawah In 2009, twelve women from the Arab world formed the global movement Musawah, whose name means \"equality\" in Arabic. Musawah advocates for feminist interpretations of Islamic texts and calls on nations to abide by international human rights standards such as those promulgated in the Convention on the Elimination of All Forms of Discrimination Against Women. Musawah's approach is modeled after that of Sisters in Islam. Secular feminists have criticized Musawah, arguing that Islam is shaky ground on which to build a feminist movement, given that interpretation of Islamic sources is subjective.\nSister-hood\nSister-hood is an international platform for the voices of women of Muslim heritage founded in 2007 by Norwegian film-maker and human rights activist Deeyah Khan through her media and arts production company Fuuse.\nSister-hood was relaunched in 2016 as a global online magazine and live events platform promoting the voices of women of Muslim heritage.\nSister-hood magazine ambassadors include Farida Shaheed from Pakistan, Egyptian Mona Eltahawy, Palestinian Rula Jebreal, Leyla Hussein of Somali heritage and Algerian Marieme Helie Lucas.\nWomen Living Under Muslim Laws (WLUML)\nWomen Living Under Muslim Laws is an international solidarity network established in 1984 that advocates for both Muslim and non-Muslim women who live in states governed by Islamic law. The group does research on Islamic law and women and advocacy work.\nMuslim Women's Quest for Equality\nMuslim Women's Quest for Equality is an Indian activist group that petitioned the Supreme Court of India against the practices of talaq-e-bidat (triple talaq), nikah halala and polygyny under the Muslim personal laws as being illegal and unconstitutional in September 2016."]], "1_corpus": "wikipedia"}
  {"tstamp": 1723244004.6623, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "45909c127d814bd0ac23db607124814e", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "A real-time panoptic segmentation model achieving near SoTA performance.", "0_output": [["A real-time panoptic segmentation model achieving near SoTA performance.", "Title: Real-Time Panoptic Segmentation from Dense Detections\n\nAbstract: Panoptic segmentation is a complex full scene parsing task requiring simultaneous instance and semantic segmentation at high resolution. Current state-of-the-art approaches cannot run in real-time, and simplifying these architectures to improve efficiency severely degrades their accuracy. In this paper, we propose a new single-shot panoptic segmentation network that leverages dense detections and a global self-attention mechanism to operate in real-time with performance approaching the state of the art. We introduce a novel parameter-free mask construction method that substantially reduces computational complexity by efficiently reusing information from the object detection and semantic segmentation sub-tasks. The resulting network has a simple data flow that does not require feature map re-sampling or clustering post-processing, enabling significant hardware acceleration. Our experiments on the Cityscapes and COCO benchmarks show that our network works at 30 FPS on 1024x2048 resolution, trading a 3% relative performance degradation from the current state of the art for up to 440% faster inference."]], "0_corpus": "arxiv", "1_conv_id": "4e3799095d84441b9e8c80514f764b7c", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "A real-time panoptic segmentation model achieving near SoTA performance.", "1_output": [["A real-time panoptic segmentation model achieving near SoTA performance.", "Title: Real-Time Panoptic Segmentation from Dense Detections\n\nAbstract: Panoptic segmentation is a complex full scene parsing task requiring simultaneous instance and semantic segmentation at high resolution. Current state-of-the-art approaches cannot run in real-time, and simplifying these architectures to improve efficiency severely degrades their accuracy. In this paper, we propose a new single-shot panoptic segmentation network that leverages dense detections and a global self-attention mechanism to operate in real-time with performance approaching the state of the art. We introduce a novel parameter-free mask construction method that substantially reduces computational complexity by efficiently reusing information from the object detection and semantic segmentation sub-tasks. The resulting network has a simple data flow that does not require feature map re-sampling or clustering post-processing, enabling significant hardware acceleration. Our experiments on the Cityscapes and COCO benchmarks show that our network works at 30 FPS on 1024x2048 resolution, trading a 3% relative performance degradation from the current state of the art for up to 440% faster inference."]], "1_corpus": "arxiv"}
+ {"tstamp": 1723244179.5866, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "f095c576fd2240f3a515c5e1e7cdf7ef", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Why did the machines in The Matrix keep humans around?", "0_output": [["Why did the machines in The Matrix keep humans around?", "Stackoverflow Stackexchange\n\nQ: Can't install gatsby-plugin-sharp.(libpng-dev may not installed) OS: Win 10\nwindows-build-tools is installed.\nWhen I was trying to install gatsby-plugin-sharp with npm install --save gatsby-plugin-sharp. I received two errors.\nI've tried this npm install --save-dev pngquant --msvs_version=2013 as someone mentioned. But it caused the same error with no 'libpng-dev'.\n\\> [email protected] postinstall C:\\Users\\54391\\Documents\\code\\personal-blog\\node_modules\\mozjpeg\n\\> node lib/install.js\n\n‼ getaddrinfo ENOENT raw.githubusercontent.com\n‼ mozjpeg pre-build test failed\ni compiling from source\n× Error: Command failed: C:\\Windows\\system32\\cmd.exe /s /c \"autoreconf -fiv\"\n'autoreconf' �����ڲ����ⲿ���Ҳ���ǿ����еij���\n���������ļ���\n\nat C:\\Users\\54391\\Documents\\code\\personal-blog\\node_modules\\bin-build\\node_modules\\execa\\index.js:231:11\nat runMicrotasks (<anonymous>)\nat processTicksAndRejections (internal/process/task_queues.js:97:5)\nat async Promise.all (index 0)\n\n\\> [email protected] postinstall C:\\Users\\54391\\Documents\\code\\personal-blog\\node_modules\\pngquant-bin\n\\> node lib/install.js\n\n‼ getaddrinfo ENOENT raw.githubusercontent.com\n‼ pngquant pre-build test failed\ni compiling from source\n× Error: pngquant failed to build, make sure that libpng-dev is installed\nat C:\\Users\\54391\\Documents\\code\\personal-blog\\node_modules\\bin-build\\node_modules\\execa\\index.js:231:11\nat processTicksAndRejections (internal/process/task_queues.js:97:5)\nat async Promise.all (index 0)\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "85875ef025614bdcba923a3349cbb1ac", "1_model_name": "voyage-multilingual-2", "1_prompt": "Why did the machines in The Matrix keep humans around?", "1_output": [["Why did the machines in The Matrix keep humans around?", "Scifi Stackexchange\n\nQ: Why do the machines allow/initiate the building of a new Zion? In the Matrix Reloaded, the Architect says:\n\nThe function of the One is now to return to the Source, allowing a temporary dissemination of the code you carry, reinserting the prime program. After which, you will be required to select from the Matrix 23 individuals - 16 female, 7 male - to rebuild Zion. Failure to comply with this process will result in a cataclysmic system crash, killing everyone connected to the Matrix, which, coupled with the extermination of Zion, will ultimately result in the extinction of the entire human race.\n\nWhy do the machines initiate the re-building of Zion?\n\nA: As was discussed elsewhere on the site, the machines are not interested in humans being wiped out, they (as shown in Animatrix) just want to co-exist with humans.\nAs such, the machines allow the building of new Zion so that humans would not - as the quote you provided says - would not go extinct.\n"]], "1_corpus": "stackexchange"}
+ {"tstamp": 1723244189.7675, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "d2d08c14b2ca445ebbf5e81ecf0b54c3", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Self-ensemble and self-distillation techniques for enhancing BERT performance", "0_output": [["Self-ensemble and self-distillation techniques for enhancing BERT performance", "Title: Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation\n\nAbstract: Fine-tuning pre-trained language models like BERT has become an effective way in NLP and yields state-of-the-art results on many downstream tasks. Recent studies on adapting BERT to new tasks mainly focus on modifying the model structure, re-designing the pre-train tasks, and leveraging external data and knowledge. The fine-tuning strategy itself has yet to be fully explored. In this paper, we improve the fine-tuning of BERT with two effective mechanisms: self-ensemble and self-distillation. The experiments on text classification and natural language inference tasks show our proposed methods can significantly improve the adaption of BERT without any external data or knowledge."]], "0_corpus": "arxiv", "1_conv_id": "bcbe43adcdf64637853373f624ae96f5", "1_model_name": "BM25", "1_prompt": "Self-ensemble and self-distillation techniques for enhancing BERT performance", "1_output": [["Self-ensemble and self-distillation techniques for enhancing BERT performance", "Title: Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation\n\nAbstract: Fine-tuning pre-trained language models like BERT has become an effective way in NLP and yields state-of-the-art results on many downstream tasks. Recent studies on adapting BERT to new tasks mainly focus on modifying the model structure, re-designing the pre-train tasks, and leveraging external data and knowledge. The fine-tuning strategy itself has yet to be fully explored. In this paper, we improve the fine-tuning of BERT with two effective mechanisms: self-ensemble and self-distillation. The experiments on text classification and natural language inference tasks show our proposed methods can significantly improve the adaption of BERT without any external data or knowledge."]], "1_corpus": "arxiv"}
data/retrieval_individual-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl CHANGED
@@ -72,3 +72,25 @@
  {"tstamp": 1723243980.07, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723243979.7172, "finish": 1723243980.07, "ip": "", "conv_id": "d412622511fb403a8d643132e50023b5", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Survey on the integration of multiple data types including text, audio, and images in language models", "output": [["Survey on the integration of multiple data types including text, audio, and images in language models", "Title: Multimodal Large Language Models: A Survey\n\nAbstract: The exploration of multimodal language models integrates multiple data types, such as images, text, language, audio, and other heterogeneity. While the latest large language models excel in text-based tasks, they often struggle to understand and process other data types. Multimodal models address this limitation by combining various modalities, enabling a more comprehensive understanding of diverse data. This paper begins by defining the concept of multimodal and examining the historical development of multimodal algorithms. Furthermore, we introduce a range of multimodal products, focusing on the efforts of major technology companies. A practical guide is provided, offering insights into the technical aspects of multimodal models. Moreover, we present a compilation of the latest algorithms and commonly used datasets, providing researchers with valuable resources for experimentation and evaluation. Lastly, we explore the applications of multimodal models and discuss the challenges associated with their development. By addressing these aspects, this paper aims to facilitate a deeper understanding of multimodal models and their potential in various domains."]], "corpus": "arxiv"}
  {"tstamp": 1723244000.2866, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1723244000.0539, "finish": 1723244000.2866, "ip": "", "conv_id": "45909c127d814bd0ac23db607124814e", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "A real-time panoptic segmentation model achieving near SoTA performance.", "output": [["A real-time panoptic segmentation model achieving near SoTA performance.", "Title: Real-Time Panoptic Segmentation from Dense Detections\n\nAbstract: Panoptic segmentation is a complex full scene parsing task requiring simultaneous instance and semantic segmentation at high resolution. Current state-of-the-art approaches cannot run in real-time, and simplifying these architectures to improve efficiency severely degrades their accuracy. In this paper, we propose a new single-shot panoptic segmentation network that leverages dense detections and a global self-attention mechanism to operate in real-time with performance approaching the state of the art. We introduce a novel parameter-free mask construction method that substantially reduces computational complexity by efficiently reusing information from the object detection and semantic segmentation sub-tasks. The resulting network has a simple data flow that does not require feature map re-sampling or clustering post-processing, enabling significant hardware acceleration. Our experiments on the Cityscapes and COCO benchmarks show that our network works at 30 FPS on 1024x2048 resolution, trading a 3% relative performance degradation from the current state of the art for up to 440% faster inference."]], "corpus": "arxiv"}
  {"tstamp": 1723244000.2866, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1723244000.0539, "finish": 1723244000.2866, "ip": "", "conv_id": "4e3799095d84441b9e8c80514f764b7c", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "A real-time panoptic segmentation model achieving near SoTA performance.", "output": [["A real-time panoptic segmentation model achieving near SoTA performance.", "Title: Real-Time Panoptic Segmentation from Dense Detections\n\nAbstract: Panoptic segmentation is a complex full scene parsing task requiring simultaneous instance and semantic segmentation at high resolution. Current state-of-the-art approaches cannot run in real-time, and simplifying these architectures to improve efficiency severely degrades their accuracy. In this paper, we propose a new single-shot panoptic segmentation network that leverages dense detections and a global self-attention mechanism to operate in real-time with performance approaching the state of the art. We introduce a novel parameter-free mask construction method that substantially reduces computational complexity by efficiently reusing information from the object detection and semantic segmentation sub-tasks. The resulting network has a simple data flow that does not require feature map re-sampling or clustering post-processing, enabling significant hardware acceleration. Our experiments on the Cityscapes and COCO benchmarks show that our network works at 30 FPS on 1024x2048 resolution, trading a 3% relative performance degradation from the current state of the art for up to 440% faster inference."]], "corpus": "arxiv"}
+ {"tstamp": 1723244011.3459, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1723244008.8211, "finish": 1723244011.3459, "ip": "", "conv_id": "41279714f790440282538f663f939612", "model_name": "text-embedding-3-large", "prompt": "what is the difference between governor and chief minister in india", "output": [["what is the difference between governor and chief minister in india", "Chief minister (India)\n\nIn India, a chief minister is the elected head of government of each state out of the 28 states and sometimes a union territory (UT). Currently, only the UTs of Delhi, Jammu and Kashmir (currently vacant) and Puducherry have serving chief ministers. According to the Constitution of India, the governor is a state's head, but de facto executive authority rests with the chief minister."]], "corpus": "wikipedia"}
+ {"tstamp": 1723244011.3459, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1723244008.8211, "finish": 1723244011.3459, "ip": "", "conv_id": "bb09a98c25f84c16a56935a69069929c", "model_name": "embed-english-v3.0", "prompt": "what is the difference between governor and chief minister in india", "output": [["what is the difference between governor and chief minister in india", "Governor (India)\n\nIn India, a governor is the constitutional head of a state of India that has similar powers and functions at the state level as those of the president of India at the central level. Governors exist in the states, while lieutenant governors and administrators exist in union territories of Delhi and Puducherry and other union territories. A governor acts as the constitutional head and takes all their decisions based on the advice of chief minister and their council of ministers.\nIn India, a lieutenant governor (LG) and administrator is a constitutional head of a union territory. However, the LG rank is present only in the union territories of Andaman and Nicobar Islands, Ladakh, Jammu and Kashmir, Delhi and Puducherry.\nQualifications, appointment and term of office\nQualifications\nArticle 157 and Article 158 of the Constitution of India specify eligibility requirements for the post of governor. They are as follows:\nA governor:\nmust be at least 35 years of age.\nshould not be a member of the either house of the parliament or house of the state legislature.\nshould not hold any office of profit.\nTraditionally, governors are not appointed to lead the states where they reside, although this is not stipulated in the constitution.\nAppointment\nThe President of India appoints the governor of each State. The factors based on which the candidates are evaluated is not mentioned in the Constitution."]], "corpus": "wikipedia"}
+ {"tstamp": 1723244068.7133, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1723244068.472, "finish": 1723244068.7133, "ip": "", "conv_id": "e73bd986ee534f689fe280dddb2505a3", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "what is the difference between a in school?", "output": [["what is the difference between a in school?", "Ell Stackexchange\n\nQ: What is the difference between \"at school\" and \"at the school\"? What is the difference in meaning between at school and at the school?\n\nA: When we say \nShe is at school.\nwe are not identifying or specifying which school. She is not at home, she is at school, whichever school it may happen to be.\nWhen we say\nShe is at the school.\nwe are indicating that we have a particular school in mind, which the listener knows either from something said earlier in the conversation or from some other contextual information.\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1723244068.7133, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1723244068.472, "finish": 1723244068.7133, "ip": "", "conv_id": "e7279629001d4e288ffffc5405c1ea76", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "what is the difference between a in school?", "output": [["what is the difference between a in school?", "Ell Stackexchange\n\nQ: Difference between \"I'm in the school\" and \"I'm in school\" What is the difference between these two sentences??\n\nI'm in the school.\n I'm in school.\n\nDo they have the same meaning or not?\n\nA: \nI'm in the school.\n\nMeans I am in the building (but I may not be a student).\n\nI'm in school.\n\nMeans I am attending school (I'm a student).\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1723244078.1342, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1723244077.9387, "finish": 1723244078.1342, "ip": "", "conv_id": "64bc81751d08467980a9ca14bab6a734", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "what is akmod in fedora?", "output": [["what is akmod in fedora?", "Superuser Stackexchange\n\nQ: Update kernel module option on Fedora 17 There is a kernel module hid_apple, and I need to set its option fnmode to 2.\nSo, I've created /etc/modprobe.d/hid_apple.conf file with such content\noptions hid_apple fnmode=2\nThen run \ndracut -f /boot/initramfs-$(uname -r).img $(uname -r)\n\nAs I understood now, the value in /sys/module/hid_apple/parameters/fnmode should be 2, but there is still default value (1). It still there even after reboot. \nWhat is wrong, and how could I fix it?\n\nA: The problem is that hid_apple is not compiled as a module - the Fedora kernel has it builtin to the main kernel image.\nSo you can't use modprobe.d to change the options as that only applies to modules.\nWhat you will need to do is to edit /etc/grub2.cfg and find the which starts with linux... for the kernel you are using and add hid_apple.fnmode=2 to the end of the line.\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1723244078.1342, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1723244077.9387, "finish": 1723244078.1342, "ip": "", "conv_id": "898df41a2fc349d7b829731b00378b5d", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "what is akmod in fedora?", "output": [["what is akmod in fedora?", "Unix Stackexchange\n\nQ: Is there is a PPA-service equivalent in the Fedora world? For Ubuntu there is this convenient PPA service available where you can get your own package repository. \nYou can put there source packages which are under an open source license, e.g. development snapshots of your software.\n(The server takes care of creating the right binary package and signs it with the repository key.)\nUsers then only need to add the repository address with one command and can install the packages via apt-get.\nSince I am looking into Fedora - is there is something like this available for Fedora packages?\nIf yes, what architectures/distribution versions are supported? How much space is available?\nWhat is the way to publish packages for multiple rpm distributions via such a service (e.g. current Fedora version and RHEL)?\n\nA: For Fedora their is: http://repos.fedorapeople.org/\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1723244088.9123, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1723244088.7537, "finish": 1723244088.9123, "ip": "", "conv_id": "56d4265fc51347dd9bc4d34c295bf012", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "What is a large language model?", "output": [["What is a large language model?", "Stackoverflow Stackexchange\n\nQ: Topic Modeling tool for large data set (30GB) I'm looking for some topic modeling tool which can be applicable to a large data set. \nMy current data set for training is 30 GB. I tried MALLET topic modeling, but always I got OutOfMemoryError. \nIf you have any tips, please let me know. \n\nA: There are many options available to you, and this response is agnostic as to how they compare.\nI think that the important thing with such a large dataset is the method of approximate posterior inference used, and not necessarily the software implementation. According to this paper, online Variational Bayes inference is much more efficient, in terms of time and space, than Gibbs sampling. Though I've never used it, the gensim package looks good. It's in python, and there are in-depth tutorials at the project's webpage.\nFor code that comes straight from the source, see the webpage of David Blei, one of the authors of the LDA model, here. He links to more than a few implementations, in a variety of languages (R, Java, C++). \n"]], "corpus": "stackexchange"}
+ {"tstamp": 1723244088.9123, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723244088.7537, "finish": 1723244088.9123, "ip": "", "conv_id": "7aa0b9eb67dd49edb233a9d4439319c9", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "What is a large language model?", "output": [["What is a large language model?", "Stackoverflow Stackexchange\n\nQ: How do I build a large-vocabulary language model for CMU Sphinx? I would like to build a language model for CMU Sphinx, but my corpus has more than 1000 words so I cannot use the online tool. How do I use (the scripts in cmuclmtk?) to build my language model?\n\nA: Please read the tutorial\nhttp://cmusphinx.sourceforge.net/wiki/tutoriallm\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1723244102.6171, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723244102.3361, "finish": 1723244102.6171, "ip": "", "conv_id": "919884a738c2431388c46cc3a20dc7c2", "model_name": "GritLM/GritLM-7B", "prompt": "A recent paper improving low-bit quantization performance for Transformer-based NLP models.", "output": [["A recent paper improving low-bit quantization performance for Transformer-based NLP models.", "Title: Zero-Shot Dynamic Quantization for Transformer Inference\n\nAbstract: We introduce a novel run-time method for significantly reducing the accuracy loss associated with quantizing BERT-like models to 8-bit integers. Existing methods for quantizing models either modify the training procedure,or they require an additional calibration step to adjust parameters that also requires a selected held-out dataset. Our method permits taking advantage of quantization without the need for these adjustments. We present results on several NLP tasks demonstrating the usefulness of this technique."]], "corpus": "arxiv"}
+ {"tstamp": 1723244102.6171, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1723244102.3361, "finish": 1723244102.6171, "ip": "", "conv_id": "20aa45dc40674cc09962d93f51ebf214", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "A recent paper improving low-bit quantization performance for Transformer-based NLP models.", "output": [["A recent paper improving low-bit quantization performance for Transformer-based NLP models.", "Title: Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition\n\nAbstract: State-of-the-art language models (LMs) represented by long-short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications. Low-bit neural network quantization provides a powerful solution to dramatically reduce their model size. Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors. To this end, novel mixed precision neural network LM quantization methods are proposed in this paper. The optimal local precision choices for LSTM-RNN and Transformer based neural LMs are automatically learned using three techniques. The first two approaches are based on quantization sensitivity metrics in the form of either the KL-divergence measured between full precision and quantized LMs, or Hessian trace weighted quantization perturbation that can be approximated efficiently using matrix free techniques. The third approach is based on mixed precision neural architecture search. In order to overcome the difficulty in using gradient descent methods to directly estimate discrete quantized weights, alternating direction methods of multipliers (ADMM) are used to efficiently train quantized LMs. Experiments were conducted on state-of-the-art LF-MMI CNN-TDNN systems featuring speed perturbation, i-Vector and learning hidden unit contribution (LHUC) based speaker adaptation on two tasks: Switchboard telephone speech and AMI meeting transcription. The proposed mixed precision quantization techniques achieved \"lossless\" quantization on both tasks, by producing model size compression ratios of up to approximately 16 times over the full precision LSTM and Transformer baseline LMs, while incurring no statistically significant word error rate increase."]], "corpus": "arxiv"}
+ {"tstamp": 1723244138.0431, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1723244137.9038, "finish": 1723244138.0431, "ip": "", "conv_id": "d32e010f627a4c818e731a9668dd96c6", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "QLoRA", "output": [["QLoRA", "Title: A compilation of known QSOs for the Gaia mission\n\nAbstract: Quasars are essential for astrometric in the sense that they are spatial stationary because of their large distance from the Sun. The European Space Agency (ESA) space astrometric satellite Gaia is scanning the whole sky with unprecedented accuracy up to a few muas level. However, Gaia's two fields of view observations strategy may introduce a parallax bias in the Gaia catalog. Since it presents no significant parallax, quasar is perfect nature object to detect such bias. More importantly, quasars can be used to construct a Celestial Reference Frame in the optical wavelengths in Gaia mission. In this paper, we compile the most reliable quasars existing in literatures. The final compilation (designated as Known Quasars Catalog for Gaia mission, KQCG) contains 1843850 objects, among of them, 797632 objects are found in Gaia DR1 after cross-identifications. This catalog will be very useful in Gaia mission."]], "corpus": "arxiv"}
+ {"tstamp": 1723244138.0431, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723244137.9038, "finish": 1723244138.0431, "ip": "", "conv_id": "61b36b18e2c940cbbbfdda6bae24d0c4", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "QLoRA", "output": [["QLoRA", "Title: Long-distance continuous-variable quantum key distribution with feasible physical noiseless linear amplifiers\n\nAbstract: Noiseless linear amplifiers (NLAs) provide a powerful tool to achieve long-distance continuous-variable quantum key distribution (CV-QKD) in the presence of realistic setups with non unit reconciliation efficiency. We address a NLA-assisted CV-QKD protocol implemented via realistic physical NLAs, namely, quantum scissors (QS) and single-photon catalysis (SPC), and compare their performance with respect to the ideal NLA $g^{\\hat{n}}$. We investigate also the robustness of two schemes against inefficient conditional detection, and discuss the two alternative scenarios in which the gain associated with the NLA is either fixed or optimized."]], "corpus": "arxiv"}
+ {"tstamp": 1723244148.9436, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723244148.5326, "finish": 1723244148.9436, "ip": "", "conv_id": "ca31bc4e973f45dea5a8eaba94656d62", "model_name": "GritLM/GritLM-7B", "prompt": "QLoRA Quantized LoRA", "output": [["QLoRA Quantized LoRA", "Title: LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models\n\nAbstract: Quantization is an indispensable technique for serving Large Language Models (LLMs) and has recently found its way into LoRA fine-tuning. In this work we focus on the scenario where quantization and LoRA fine-tuning are applied together on a pre-trained model. In such cases it is common to observe a consistent gap in the performance on downstream tasks between full fine-tuning and quantization plus LoRA fine-tuning approach. In response, we propose LoftQ (LoRA-Fine-Tuning-aware Quantization), a novel quantization framework that simultaneously quantizes an LLM and finds a proper low-rank initialization for LoRA fine-tuning. Such an initialization alleviates the discrepancy between the quantized and full-precision model and significantly improves generalization in downstream tasks. We evaluate our method on natural language understanding, question answering, summarization, and natural language generation tasks. Experiments show that our method is highly effective and outperforms existing quantization methods, especially in the challenging 2-bit and 2/4-bit mixed precision regimes. The code is available on https://github.com/yxli2123/LoftQ."]], "corpus": "arxiv"}
+ {"tstamp": 1723244148.9436, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1723244148.5326, "finish": 1723244148.9436, "ip": "", "conv_id": "e19ff1daec614d62b02722b17aca3545", "model_name": "voyage-multilingual-2", "prompt": "QLoRA Quantized LoRA", "output": [["QLoRA Quantized LoRA", "Title: Implementation and Evaluation of Physical Layer Key Generation on SDR based LoRa Platform\n\nAbstract: Physical layer key generation technology which leverages channel randomness to generate secret keys has attracted extensive attentions in long range (LoRa)-based networks recently. We in this paper develop a software-defined radio (SDR) based LoRa communications platform using GNU Radio on universal software radio peripheral (USRP) to implement and evaluate typical physical layer key generation schemes. Thanks to the flexibility and configurability of GNU Radio to extract LoRa packets, we are able to obtain the fine-grained channel frequency response (CFR) through LoRa preamble based channel estimation for key generation. Besides, we propose a lowcomplexity preprocessing method to enhance the randomness of quantization while reducing the secret key disagreement ratio. The results indicate that we can achieve 367 key bits with a high level of randomness through just a single effective channel probing in an indoor environment at a distance of 2 meters under the circumstance of a spreading factor (SF) of 7, a preamble length of 8, a signal bandwidth of 250 kHz, and a sampling rate of 1 MHz."]], "corpus": "arxiv"}
+ {"tstamp": 1723244168.4601, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723244168.2211, "finish": 1723244168.4601, "ip": "", "conv_id": "62a5066c4b074dcd88be00db87b99279", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
+ {"tstamp": 1723244168.4601, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1723244168.2211, "finish": 1723244168.4601, "ip": "", "conv_id": "6fcd197b135044a7a068a5508ee30a06", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
+ {"tstamp": 1723244174.0603, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1723244173.7275, "finish": 1723244174.0603, "ip": "", "conv_id": "f095c576fd2240f3a515c5e1e7cdf7ef", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Why did the machines in The Matrix keep humans around?", "output": [["Why did the machines in The Matrix keep humans around?", "Stackoverflow Stackexchange\n\nQ: Can't install gatsby-plugin-sharp.(libpng-dev may not installed) OS: Win 10\nwindows-build-tools is installed.\nWhen I was trying to install gatsby-plugin-sharp with npm install --save gatsby-plugin-sharp. I received two errors.\nI've tried this npm install --save-dev pngquant --msvs_version=2013 as someone mentioned. But it caused the same error with no 'libpng-dev'.\n\\> [email protected] postinstall C:\\Users\\54391\\Documents\\code\\personal-blog\\node_modules\\mozjpeg\n\\> node lib/install.js\n\n‼ getaddrinfo ENOENT raw.githubusercontent.com\n‼ mozjpeg pre-build test failed\ni compiling from source\n× Error: Command failed: C:\\Windows\\system32\\cmd.exe /s /c \"autoreconf -fiv\"\n'autoreconf' �����ڲ����ⲿ���Ҳ���ǿ����еij���\n���������ļ���\n\nat C:\\Users\\54391\\Documents\\code\\personal-blog\\node_modules\\bin-build\\node_modules\\execa\\index.js:231:11\nat runMicrotasks (<anonymous>)\nat processTicksAndRejections (internal/process/task_queues.js:97:5)\nat async Promise.all (index 0)\n\n\\> [email protected] postinstall C:\\Users\\54391\\Documents\\code\\personal-blog\\node_modules\\pngquant-bin\n\\> node lib/install.js\n\n‼ getaddrinfo ENOENT raw.githubusercontent.com\n‼ pngquant pre-build test failed\ni compiling from source\n× Error: pngquant failed to build, make sure that libpng-dev is installed\nat C:\\Users\\54391\\Documents\\code\\personal-blog\\node_modules\\bin-build\\node_modules\\execa\\index.js:231:11\nat processTicksAndRejections (internal/process/task_queues.js:97:5)\nat async Promise.all (index 0)\n\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1723244174.0603, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1723244173.7275, "finish": 1723244174.0603, "ip": "", "conv_id": "85875ef025614bdcba923a3349cbb1ac", "model_name": "voyage-multilingual-2", "prompt": "Why did the machines in The Matrix keep humans around?", "output": [["Why did the machines in The Matrix keep humans around?", "Scifi Stackexchange\n\nQ: Why do the machines allow/initiate the building of a new Zion? In the Matrix Reloaded, the Architect says:\n\nThe function of the One is now to return to the Source, allowing a temporary dissemination of the code you carry, reinserting the prime program. After which, you will be required to select from the Matrix 23 individuals - 16 female, 7 male - to rebuild Zion. Failure to comply with this process will result in a cataclysmic system crash, killing everyone connected to the Matrix, which, coupled with the extermination of Zion, will ultimately result in the extinction of the entire human race.\n\nWhy do the machines initiate the re-building of Zion?\n\nA: As was discussed elsewhere on the site, the machines are not interested in humans being wiped out, they (as shown in Animatrix) just want to co-exist with humans.\nAs such, the machines allow the building of new Zion so that humans would not - as the quote you provided says - would not go extinct.\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1723244187.2631, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1723244185.8436, "finish": 1723244187.2631, "ip": "", "conv_id": "d2d08c14b2ca445ebbf5e81ecf0b54c3", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Self-ensemble and self-distillation techniques for enhancing BERT performance", "output": [["Self-ensemble and self-distillation techniques for enhancing BERT performance", "Title: Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation\n\nAbstract: Fine-tuning pre-trained language models like BERT has become an effective way in NLP and yields state-of-the-art results on many downstream tasks. Recent studies on adapting BERT to new tasks mainly focus on modifying the model structure, re-designing the pre-train tasks, and leveraging external data and knowledge. The fine-tuning strategy itself has yet to be fully explored. In this paper, we improve the fine-tuning of BERT with two effective mechanisms: self-ensemble and self-distillation. The experiments on text classification and natural language inference tasks show our proposed methods can significantly improve the adaption of BERT without any external data or knowledge."]], "corpus": "arxiv"}
+ {"tstamp": 1723244187.2631, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1723244185.8436, "finish": 1723244187.2631, "ip": "", "conv_id": "bcbe43adcdf64637853373f624ae96f5", "model_name": "BM25", "prompt": "Self-ensemble and self-distillation techniques for enhancing BERT performance", "output": [["Self-ensemble and self-distillation techniques for enhancing BERT performance", "Title: Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation\n\nAbstract: Fine-tuning pre-trained language models like BERT has become an effective way in NLP and yields state-of-the-art results on many downstream tasks. Recent studies on adapting BERT to new tasks mainly focus on modifying the model structure, re-designing the pre-train tasks, and leveraging external data and knowledge. The fine-tuning strategy itself has yet to be fully explored. In this paper, we improve the fine-tuning of BERT with two effective mechanisms: self-ensemble and self-distillation. The experiments on text classification and natural language inference tasks show our proposed methods can significantly improve the adaption of BERT without any external data or knowledge."]], "corpus": "arxiv"}
+ {"tstamp": 1723244196.0621, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1723244193.9259, "finish": 1723244196.0621, "ip": "", "conv_id": "edf26e4668b8440bb12335c91925049e", "model_name": "text-embedding-3-large", "prompt": "what is the general structure of an url", "output": [["what is the general structure of an url", "URL\n\nThe format combines the pre-existing system of domain names (created in 1985) with file path syntax, where slashes are used to separate directory and filenames. Conventions already existed where server names could be prefixed to complete file paths, preceded by a double slash (//).\nBerners-Lee later expressed regret at the use of dots to separate the parts of the domain name within URIs, wishing he had used slashes throughout, and also said that, given the colon following the first component of a URI, the two slashes before the domain name were unnecessary.\nEarly WorldWideWeb collaborators including Berners-Lee originally proposed the use of UDIs: Universal Document Identifiers.\nAn early (1993) draft of the HTML Specification referred to \"Universal\" Resource Locators. This was dropped some time between June 1994 () and October 1994 (draft-ietf-uri-url-08.txt). In his book Weaving the Web, Berners-Lee emphasizes his preference for the original inclusion of \"universal\" in the expansion rather than the word \"uniform\", to which it was later changed, and he gives a brief account of the contention that led to the change.\nSyntax\nEvery HTTP URL conforms to the syntax of a generic URI.\nA web browser will usually dereference a URL by performing an HTTP request to the specified host, by default on port number 80. URLs using the https scheme require that requests and responses be made over a secure connection to the website."]], "corpus": "wikipedia"}
+ {"tstamp": 1723244196.0621, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723244193.9259, "finish": 1723244196.0621, "ip": "", "conv_id": "4f953dd6548a4b8ebda5302c212d74ad", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "what is the general structure of an url", "output": [["what is the general structure of an url", "Uniform Resource Identifier\n\nAs such, a URL is simply a URI that happens to point to a resource over a network. However, in non-technical contexts and in software for the World Wide Web, the term \"URL\" remains widely used. Additionally, the term \"web address\" (which has no formal definition) often occurs in non-technical publications as a synonym for a URI that uses the http or https schemes. Such assumptions can lead to confusion, for example, in the case of XML namespaces that have a visual similarity to resolvable URIs.\nSpecifications produced by the WHATWG prefer URL over URI, and so newer HTML5 APIs use URL over URI.\nWhile most URI schemes were originally designed to be used with a particular protocol, and often have the same name, they are semantically different from protocols. For example, the scheme http is generally used for interacting with web resources using HTTP, but the scheme file has no protocol.\nSyntax\nA URI has a scheme that refers to a specification for assigning identifiers within that scheme. As such, the URI syntax is a federated and extensible naming system wherein each scheme's specification may further restrict the syntax and semantics of identifiers using that scheme. The URI generic syntax is a superset of the syntax of all URI schemes. It was first defined in , published in August 1998, and finalized in , published in January 2005."]], "corpus": "wikipedia"}