Title | Authors | Abstract | entry_id | Date | Categories | year |
|---|---|---|---|---|---|---|
Open Source Language Models Can Provide Feedback: Evaluating LLMs' Ability to Help Students Using GPT-4-As-A-Judge | Charles Koutcheme, Nicola Dainese, Sami Sarsa, Arto Hellas, Juho Leinonen, Paul Denny | Large language models (LLMs) have shown great potential for the automatic generation of feedback in a wide range of computing contexts. However, concerns have been voiced around the privacy and ethical implications of sending student work to proprietary models. This has sparked considerable interest in the use of open ... | http://arxiv.org/abs/2405.05253v1 | 2024-05-08T17:57:39 | cs.CL, cs.AI, cs.CY | 2,024 |
QFMTS: Generating Query-Focused Summaries over Multi-Table Inputs | Weijia Zhang, Vaishali Pal, Jia-Hong Huang, Evangelos Kanoulas, Maarten de Rijke | Table summarization is a crucial task aimed at condensing information from tabular data into concise and comprehensible textual summaries. However, existing approaches often fall short of adequately meeting users' information and quality requirements and tend to overlook the complexities of real-world queries. In this ... | http://arxiv.org/abs/2405.05109v1 | 2024-05-08T15:05:55 | cs.CL, cs.AI | 2,024 |
Seeds of Stereotypes: A Large-Scale Textual Analysis of Race and Gender Associations with Diseases in Online Sources | Lasse Hyldig Hansen, Nikolaj Andersen, Jack Gallifant, Liam G. McCoy, James K Stone, Nura Izath, Marcela Aguirre-Jerez, Danielle S Bitterman, Judy Gichoya, Leo Anthony Celi | Background Advancements in Large Language Models (LLMs) hold transformative potential in healthcare, however, recent work has raised concern about the tendency of these models to produce outputs that display racial or gender biases. Although training data is a likely source of such biases, exploration of disease and de... | http://arxiv.org/abs/2405.05049v1 | 2024-05-08T13:38:56 | cs.CL | 2,024 |
ADELIE: Aligning Large Language Models on Information Extraction | Yunjia Qi, Hao Peng, Xiaozhi Wang, Bin Xu, Lei Hou, Juanzi Li | Large language models (LLMs) usually fall short on information extraction (IE) tasks and struggle to follow the complex instructions of IE tasks. This primarily arises from LLMs not being aligned with humans, as mainstream alignment datasets typically do not include IE data. In this paper, we introduce ADELIE (Aligning... | http://arxiv.org/abs/2405.05008v1 | 2024-05-08T12:24:52 | cs.CL | 2,024 |
Harnessing the Power of MLLMs for Transferable Text-to-Image Person ReID | Wentao Tan, Changxing Ding, Jiayu Jiang, Fei Wang, Yibing Zhan, Dapeng Tao | Text-to-image person re-identification (ReID) retrieves pedestrian images according to textual descriptions. Manually annotating textual descriptions is time-consuming, restricting the scale of existing datasets and therefore the generalization ability of ReID models. As a result, we study the transferable text-to-imag... | http://arxiv.org/abs/2405.04940v1 | 2024-05-08T10:15:04 | cs.CV | 2,024 |
Traj-LLM: A New Exploration for Empowering Trajectory Prediction with Pre-trained Large Language Models | Zhengxing Lan, Hongbo Li, Lingshan Liu, Bo Fan, Yisheng Lv, Yilong Ren, Zhiyong Cui | Predicting the future trajectories of dynamic traffic actors is a cornerstone task in autonomous driving. Though existing notable efforts have resulted in impressive performance improvements, a gap persists in scene cognitive and understanding of the complex traffic semantics. This paper proposes Traj-LLM, the first to... | http://arxiv.org/abs/2405.04909v1 | 2024-05-08T09:28:04 | cs.CV, cs.AI | 2,024 |
ACORN: Aspect-wise Commonsense Reasoning Explanation Evaluation | Ana Brassard, Benjamin Heinzerling, Keito Kudo, Keisuke Sakaguchi, Kentaro Inui | Evaluating free-text explanations is a multifaceted, subjective, and labor-intensive task. Large language models (LLMs) present an appealing alternative due to their potential for consistency, scalability, and cost-efficiency. In this work, we present ACORN, a new dataset of 3,500 free-text explanations and aspect-wise... | http://arxiv.org/abs/2405.04818v1 | 2024-05-08T05:36:52 | cs.CL | 2,024 |
Zero-shot LLM-guided Counterfactual Generation for Text | Amrita Bhattacharjee, Raha Moraffah, Joshua Garland, Huan Liu | Counterfactual examples are frequently used for model development and evaluation in many natural language processing (NLP) tasks. Although methods for automated counterfactual generation have been explored, such methods depend on models such as pre-trained language models that are then fine-tuned on auxiliary, often ta... | http://arxiv.org/abs/2405.04793v1 | 2024-05-08T03:57:45 | cs.CL, cs.AI, cs.LG | 2,024 |
CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization | Zheyan Qu, Lu Yin, Zitong Yu, Wenbo Wang, Xing zhang | Large language models (LLMs) have demonstrated astonishing capabilities in natural language processing (NLP) tasks, sparking interest in their application to professional domains with higher specialized requirements. However, restricted access to closed-source LLMs via APIs and the difficulty in collecting massive high... | http://arxiv.org/abs/2405.04781v1 | 2024-05-08T03:11:12 | cs.CL | 2,024 |
Large Language Models for Cyber Security: A Systematic Literature Review | HanXiang Xu, ShenAo Wang, Ningke Li, Yanjie Zhao, Kai Chen, Kailong Wang, Yang Liu, Ting Yu, HaoYu Wang | The rapid advancement of Large Language Models (LLMs) has opened up new opportunities for leveraging artificial intelligence in various domains, including cybersecurity. As the volume and sophistication of cyber threats continue to grow, there is an increasing need for intelligent systems that can automatically detect ... | http://arxiv.org/abs/2405.04760v1 | 2024-05-08T02:09:17 | cs.CR, cs.AI | 2,024 |
AttacKG+: Boosting Attack Knowledge Graph Construction with Large Language Models | Yongheng Zhang, Tingwen Du, Yunshan Ma, Xiang Wang, Yi Xie, Guozheng Yang, Yuliang Lu, Ee-Chien Chang | Attack knowledge graph construction seeks to convert textual cyber threat intelligence (CTI) reports into structured representations, portraying the evolutionary traces of cyber attacks. Even though previous research has proposed various methods to construct attack knowledge graphs, they generally suffer from limited g... | http://arxiv.org/abs/2405.04753v1 | 2024-05-08T01:41:25 | cs.CR, cs.AI | 2,024 |
S-EQA: Tackling Situational Queries in Embodied Question Answering | Vishnu Sashank Dorbala, Prasoon Goyal, Robinson Piramuthu, Michael Johnston, Dinesh Manocha, Reza Ghanadhan | We present and tackle the problem of Embodied Question Answering (EQA) with Situational Queries (S-EQA) in a household environment. Unlike prior EQA work tackling simple queries that directly reference target objects and quantifiable properties pertaining them, EQA with situational queries (such as "Is the bathroom cle... | http://arxiv.org/abs/2405.04732v1 | 2024-05-08T00:45:20 | cs.RO, cs.AI | 2,024 |
LLMs Can Patch Up Missing Relevance Judgments in Evaluation | Shivani Upadhyay, Ehsan Kamalloo, Jimmy Lin | Unjudged documents or holes in information retrieval benchmarks are considered non-relevant in evaluation, yielding no gains in measuring effectiveness. However, these missing judgments may inadvertently introduce biases into the evaluation as their prevalence for a retrieval model is heavily contingent on the pooling ... | http://arxiv.org/abs/2405.04727v1 | 2024-05-08T00:32:19 | cs.IR | 2,024 |
Remote Diffusion | Kunal Sunil Kasodekar | I explored adapting Stable Diffusion v1.5 for generating domain-specific satellite and aerial images in remote sensing. Recognizing the limitations of existing models like Midjourney and Stable Diffusion, trained primarily on natural RGB images and lacking context for remote sensing, I used the RSICD dataset to train a... | http://arxiv.org/abs/2405.04717v1 | 2024-05-07T23:44:09 | cs.CV | 2,024 |
Bridging the Bosphorus: Advancing Turkish Large Language Models through Strategies for Low-Resource Language Adaptation and Benchmarking | Emre Can Acikgoz, Mete Erdogan, Deniz Yuret | Large Language Models (LLMs) are becoming crucial across various fields, emphasizing the urgency for high-quality models in underrepresented languages. This study explores the unique challenges faced by low-resource languages, such as data scarcity, model selection, evaluation, and computational limitations, with a spe... | http://arxiv.org/abs/2405.04685v1 | 2024-05-07T21:58:45 | cs.CL, cs.AI, cs.LG | 2,024 |
D-NLP at SemEval-2024 Task 2: Evaluating Clinical Inference Capabilities of Large Language Models | Duygu Altinok | Large language models (LLMs) have garnered significant attention and widespread usage due to their impressive performance in various tasks. However, they are not without their own set of challenges, including issues such as hallucinations, factual inconsistencies, and limitations in numerical-quantitative reasoning. Ev... | http://arxiv.org/abs/2405.04170v1 | 2024-05-07T10:11:14 | cs.CL | 2,024 |
Optimizing Language Model's Reasoning Abilities with Weak Supervision | Yongqi Tong, Sizhe Wang, Dawei Li, Yifan Wang, Simeng Han, Zi Lin, Chengsong Huang, Jiaxin Huang, Jingbo Shang | While Large Language Models (LLMs) have demonstrated proficiency in handling complex queries, much of the past work has depended on extensively annotated datasets by human experts. However, this reliance on fully-supervised annotations poses scalability challenges, particularly as models and data requirements grow. To ... | http://arxiv.org/abs/2405.04086v1 | 2024-05-07T07:39:15 | cs.CL | 2,024 |
Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application | Jian Jia, Yipei Wang, Yan Li, Honggang Chen, Xuehan Bai, Zhaocheng Liu, Jian Liang, Quan Chen, Han Li, Peng Jiang, Kun Gai | Contemporary recommender systems predominantly rely on collaborative filtering techniques, employing ID-embedding to capture latent associations among users and items. However, this approach overlooks the wealth of semantic information embedded within textual descriptions of items, leading to suboptimal performance in ... | http://arxiv.org/abs/2405.03988v1 | 2024-05-07T04:00:30 | cs.IR, cs.AI | 2,024 |
Long Context Alignment with Short Instructions and Synthesized Positions | Wenhao Wu, Yizhong Wang, Yao Fu, Xiang Yue, Dawei Zhu, Sujian Li | Effectively handling instructions with extremely long context remains a challenge for Large Language Models (LLMs), typically necessitating high-quality long data and substantial computational resources. This paper introduces Step-Skipping Alignment (SkipAlign), a new technique designed to enhance the long-context capa... | http://arxiv.org/abs/2405.03939v1 | 2024-05-07T01:56:22 | cs.CL | 2,024 |
Self-Improving Customer Review Response Generation Based on LLMs | Guy Azov, Tatiana Pelc, Adi Fledel Alon, Gila Kamhi | Previous studies have demonstrated that proactive interaction with user reviews has a positive impact on the perception of app users and encourages them to submit revised ratings. Nevertheless, developers encounter challenges in managing a high volume of reviews, particularly in the case of popular apps with a substant... | http://arxiv.org/abs/2405.03845v1 | 2024-05-06T20:50:17 | cs.CL, cs.AI | 2,024 |
TOGLL: Correct and Strong Test Oracle Generation with LLMs | Soneya Binta Hossain, Matthew Dwyer | Test oracles play a crucial role in software testing, enabling effective bug detection. Despite initial promise, neural-based methods for automated test oracle generation often result in a large number of false positives and weaker test oracles. While LLMs have demonstrated impressive effectiveness in various software ... | http://arxiv.org/abs/2405.03786v1 | 2024-05-06T18:37:35 | cs.SE | 2,024 |
Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs | Muhammad Uzair Khattak, Muhammad Ferjad Naeem, Jameel Hassan, Muzammal Naseer, Federico Tombari, Fahad Shahbaz Khan, Salman Khan | Recent advancements in Large Language Models (LLMs) have led to the development of Video Large Multi-modal Models (Video-LMMs) that can handle a wide range of video understanding tasks. These models have the potential to be deployed in real-world applications such as robotics, AI assistants, medical imaging, and autono... | http://arxiv.org/abs/2405.03690v1 | 2024-05-06T17:59:45 | cs.CV | 2,024 |
Large Language Models Reveal Information Operation Goals, Tactics, and Narrative Frames | Keith Burghardt, Kai Chen, Kristina Lerman | Adversarial information operations can destabilize societies by undermining fair elections, manipulating public opinions on policies, and promoting scams. Despite their widespread occurrence and potential impacts, our understanding of influence campaigns is limited by manual analysis of messages and subjective interpre... | http://arxiv.org/abs/2405.03688v1 | 2024-05-06T17:59:07 | cs.CL, cs.LG | 2,024 |
Language-Image Models with 3D Understanding | Jang Hyun Cho, Boris Ivanovic, Yulong Cao, Edward Schmerling, Yue Wang, Xinshuo Weng, Boyi Li, Yurong You, Philipp Krähenbühl, Yan Wang, Marco Pavone | Multi-modal large language models (MLLMs) have shown incredible capabilities in a variety of 2D vision and language tasks. We extend MLLMs' perceptual capabilities to ground and reason about images in 3-dimensional space. To that end, we first develop a large-scale pre-training dataset for 2D and 3D called LV3D by comb... | http://arxiv.org/abs/2405.03685v1 | 2024-05-06T17:57:27 | cs.CV, cs.AI, cs.CL, cs.LG | 2,024 |
Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment | Abhinav Agarwalla, Abhay Gupta, Alexandre Marques, Shubhra Pandit, Michael Goin, Eldar Kurtic, Kevin Leong, Tuan Nguyen, Mahmoud Salem, Dan Alistarh, Sean Lie, Mark Kurtz | Large language models (LLMs) have revolutionized Natural Language Processing (NLP), but their size creates computational bottlenecks. We introduce a novel approach to create accurate, sparse foundational versions of performant LLMs that achieve full accuracy recovery for fine-tuning tasks at up to 70% sparsity. We achi... | http://arxiv.org/abs/2405.03594v1 | 2024-05-06T16:03:32 | cs.CL, cs.AI | 2,024 |
MAmmoTH2: Scaling Instructions from the Web | Xiang Yue, Tuney Zheng, Ge Zhang, Wenhu Chen | Instruction tuning improves the reasoning abilities of large language models (LLMs), with data quality and scalability being the crucial factors. Most instruction tuning data come from human crowd-sourcing or GPT-4 distillation. We propose a paradigm to efficiently harvest 10 million naturally existing instruction data... | http://arxiv.org/abs/2405.03548v1 | 2024-05-06T15:11:38 | cs.CL | 2,024 |
Are Human Rules Necessary? Generating Reusable APIs with CoT Reasoning and In-Context Learning | Yubo Mai, Zhipeng Gao, Xing Hu, Lingfeng Bao, Yu Liu, Jianling Sun | Inspired by the great potential of Large Language Models (LLMs) for solving complex coding tasks, in this paper, we propose a novel approach, named Code2API, to automatically perform APIzation for Stack Overflow code snippets. Code2API does not require additional model training or any manual crafting rules and can be e... | http://arxiv.org/abs/2405.03509v1 | 2024-05-06T14:22:17 | cs.SE | 2,024 |
Doing Personal LAPS: LLM-Augmented Dialogue Construction for Personalized Multi-Session Conversational Search | Hideaki Joko, Shubham Chatterjee, Andrew Ramsay, Arjen P. de Vries, Jeff Dalton, Faegheh Hasibi | The future of conversational agents will provide users with personalized information responses. However, a significant challenge in developing models is the lack of large-scale dialogue datasets that span multiple sessions and reflect real-world user preferences. Previous approaches rely on experts in a wizard-of-oz se... | http://arxiv.org/abs/2405.03480v1 | 2024-05-06T13:53:03 | cs.IR | 2,024 |
SEvenLLM: Benchmarking, Eliciting, and Enhancing Abilities of Large Language Models in Cyber Threat Intelligence | Hangyuan Ji, Jian Yang, Linzheng Chai, Chaoren Wei, Liqun Yang, Yunlong Duan, Yunli Wang, Tianzhen Sun, Hongcheng Guo, Tongliang Li, Changyu Ren, Zhoujun Li | To address the increasing complexity and frequency of cybersecurity incidents emphasized by the recent cybersecurity threat reports with over 10 billion instances, cyber threat intelligence (CTI) plays a critical role in the modern cybersecurity landscape by offering the insights required to understand and combat the c... | http://arxiv.org/abs/2405.03446v1 | 2024-05-06T13:17:43 | cs.CR | 2,024 |
Gaussian Stochastic Weight Averaging for Bayesian Low-Rank Adaptation of Large Language Models | Emre Onal, Klemens Flöge, Emma Caldwell, Arsen Sheverdin, Vincent Fortuin | Fine-tuned Large Language Models (LLMs) often suffer from overconfidence and poor calibration, particularly when fine-tuned on small datasets. To address these challenges, we propose a simple combination of Low-Rank Adaptation (LoRA) with Gaussian Stochastic Weight Averaging (SWAG), facilitating approximate Bayesian in... | http://arxiv.org/abs/2405.03425v1 | 2024-05-06T12:44:37 | cs.CL | 2,024 |
Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning | Qizhou Chen, Taolin Zhang, Xiaofeng He, Dongyang Li, Chengyu Wang, Longtao Huang, Hui Xue | Model editing aims to correct outdated or erroneous knowledge in large language models (LLMs) without the need for costly retraining. Lifelong model editing is the most challenging task that caters to the continuous editing requirements of LLMs. Prior works primarily focus on single or batch editing; nevertheless, thes... | http://arxiv.org/abs/2405.03279v2 | 2024-05-06T08:52:11 | cs.CL | 2,024 |
WorldQA: Multimodal World Knowledge in Videos through Long-Chain Reasoning | Yuanhan Zhang, Kaichen Zhang, Bo Li, Fanyi Pu, Christopher Arif Setiadharma, Jingkang Yang, Ziwei Liu | Multimodal information, together with our knowledge, help us to understand the complex and dynamic world. Large language models (LLM) and large multimodal models (LMM), however, still struggle to emulate this capability. In this paper, we present WorldQA, a video understanding dataset designed to push the boundaries of... | http://arxiv.org/abs/2405.03272v1 | 2024-05-06T08:42:34 | cs.CV | 2,024 |
MARE: Multi-Agents Collaboration Framework for Requirements Engineering | Dongming Jin, Zhi Jin, Xiaohong Chen, Chunhui Wang | Requirements Engineering (RE) is a critical phase in the software development process that generates requirements specifications from stakeholders' needs. Recently, deep learning techniques have been successful in several RE tasks. However, obtaining high-quality requirements specifications requires collaboration acros... | http://arxiv.org/abs/2405.03256v1 | 2024-05-06T08:24:55 | cs.SE | 2,024 |
Vietnamese AI Generated Text Detection | Quang-Dan Tran, Van-Quan Nguyen, Quang-Huy Pham, K. B. Thang Nguyen, Trong-Hop Do | In recent years, Large Language Models (LLMs) have become integrated into our daily lives, serving as invaluable assistants in completing tasks. Widely embraced by users, the abuse of LLMs is inevitable, particularly in using them to generate text content for various purposes, leading to difficulties in distinguishing ... | http://arxiv.org/abs/2405.03206v1 | 2024-05-06T07:12:22 | cs.CL, cs.AI | 2,024 |
Anchored Answers: Unravelling Positional Bias in GPT-2's Multiple-Choice Questions | Ruizhe Li, Yanjun Gao | Large Language Models (LLMs), such as the GPT-4 and LLaMA families, have demonstrated considerable success across diverse tasks, including multiple-choice questions (MCQs). However, these models exhibit a positional bias, particularly an even worse anchored bias in the GPT-2 family, where they consistently favour the f... | http://arxiv.org/abs/2405.03205v1 | 2024-05-06T07:10:09 | cs.CL, cs.AI, cs.LG | 2,024 |
sc-OTGM: Single-Cell Perturbation Modeling by Solving Optimal Mass Transport on the Manifold of Gaussian Mixtures | Andac Demir, Elizaveta Solovyeva, James Boylan, Mei Xiao, Fabrizio Serluca, Sebastian Hoersch, Jeremy Jenkins, Murthy Devarakonda, Bulent Kiziltan | Influenced by breakthroughs in LLMs, single-cell foundation models are emerging. While these models show successful performance in cell type clustering, phenotype classification, and gene perturbation response prediction, it remains to be seen if a simpler model could achieve comparable or better results, especially wi... | http://arxiv.org/abs/2405.03726v1 | 2024-05-06T06:46:11 | q-bio.GN, cs.LG | 2,024 |
Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines | Md Main Uddin Rony, Md Mahfuzul Haque, Mohammad Ali, Ahmed Shatil Alam, Naeemul Hassan | In the digital age, the prevalence of misleading news headlines poses a significant challenge to information integrity, necessitating robust detection mechanisms. This study explores the efficacy of Large Language Models (LLMs) in identifying misleading versus non-misleading news headlines. Utilizing a dataset of 60 ar... | http://arxiv.org/abs/2405.03153v1 | 2024-05-06T04:06:45 | cs.CL, cs.CY, cs.LG | 2,024 |
MMGER: Multi-modal and Multi-granularity Generative Error Correction with LLM for Joint Accent and Speech Recognition | Bingshen Mu, Yangze Li, Qijie Shao, Kun Wei, Xucheng Wan, Naijun Zheng, Huan Zhou, Lei Xie | Despite notable advancements in automatic speech recognition (ASR), performance tends to degrade when faced with adverse conditions. Generative error correction (GER) leverages the exceptional text comprehension capabilities of large language models (LLM), delivering impressive performance in ASR error correction, wher... | http://arxiv.org/abs/2405.03152v1 | 2024-05-06T04:05:19 | eess.AS, cs.SD | 2,024 |
CRAFT: Extracting and Tuning Cultural Instructions from the Wild | Bin Wang, Geyu Lin, Zhengyuan Liu, Chengwei Wei, Nancy F. Chen | Large language models (LLMs) have rapidly evolved as the foundation of various natural language processing (NLP) applications. Despite their wide use cases, their understanding of culturally-related concepts and reasoning remains limited. Meantime, there is a significant need to enhance these models' cultural reasoning... | http://arxiv.org/abs/2405.03138v1 | 2024-05-06T03:21:55 | cs.CL | 2,024 |
WDMoE: Wireless Distributed Large Language Models with Mixture of Experts | Nan Xue, Yaping Sun, Zhiyong Chen, Meixia Tao, Xiaodong Xu, Liang Qian, Shuguang Cui, Ping Zhang | Large Language Models (LLMs) have achieved significant success in various natural language processing tasks, but how wireless communications can support LLMs has not been extensively studied. In this paper, we propose a wireless distributed LLMs paradigm based on Mixture of Experts (MoE), named WDMoE, deploying LLMs co... | http://arxiv.org/abs/2405.03131v1 | 2024-05-06T02:55:50 | cs.IT, cs.AI, cs.LG, math.IT | 2,024 |
Compressing Long Context for Enhancing RAG with AMR-based Concept Distillation | Kaize Shi, Xueyao Sun, Qing Li, Guandong Xu | Large Language Models (LLMs) have made significant strides in information acquisition. However, their overreliance on potentially flawed parametric knowledge leads to hallucinations and inaccuracies, particularly when handling long-tail, domain-specific queries. Retrieval Augmented Generation (RAG) addresses this limit... | http://arxiv.org/abs/2405.03085v1 | 2024-05-06T00:18:43 | cs.CL | 2,024 |
Can Large Language Models Make the Grade? An Empirical Study Evaluating LLMs Ability to Mark Short Answer Questions in K-12 Education | Owen Henkel, Adam Boxer, Libby Hills, Bill Roberts | This paper presents reports on a series of experiments with a novel dataset evaluating how well Large Language Models (LLMs) can mark (i.e. grade) open text responses to short answer questions. Specifically, we explore how well different combinations of GPT version and prompt engineering strategies performed at marking... | http://arxiv.org/abs/2405.02985v1 | 2024-05-05T16:11:06 | cs.CL, cs.AI | 2,024 |
Agent Hospital: A Simulacrum of Hospital with Evolvable Medical Agents | Junkai Li, Siyu Wang, Meng Zhang, Weitao Li, Yunghwei Lai, Xinhui Kang, Weizhi Ma, Yang Liu | In this paper, we introduce a simulacrum of hospital called Agent Hospital that simulates the entire process of treating illness. All patients, nurses, and doctors are autonomous agents powered by large language models (LLMs). Our central goal is to enable a doctor agent to learn how to treat illness within the simulac... | http://arxiv.org/abs/2405.02957v1 | 2024-05-05T14:53:51 | cs.AI | 2,024 |
Unraveling the Dominance of Large Language Models Over Transformer Models for Bangla Natural Language Inference: A Comprehensive Study | Fatema Tuj Johora Faria, Mukaffi Bin Moin, Asif Iftekher Fahim, Pronay Debnath, Faisal Muhammad Shah | Natural Language Inference (NLI) is a cornerstone of Natural Language Processing (NLP), providing insights into the entailment relationships between text pairings. It is a critical component of Natural Language Understanding (NLU), demonstrating the ability to extract information from spoken or written interactions. NL... | http://arxiv.org/abs/2405.02937v2 | 2024-05-05T13:57:05 | cs.CL | 2,024 |
Overconfidence is Key: Verbalized Uncertainty Evaluation in Large Language and Vision-Language Models | Tobias Groot, Matias Valdenegro-Toro | Language and Vision-Language Models (LLMs/VLMs) have revolutionized the field of AI by their ability to generate human-like text and understand images, but ensuring their reliability is crucial. This paper aims to evaluate the ability of LLMs (GPT4, GPT-3.5, LLaMA2, and PaLM 2) and VLMs (GPT4V and Gemini Pro Vision) to... | http://arxiv.org/abs/2405.02917v1 | 2024-05-05T12:51:38 | cs.CV, cs.CL, cs.LG | 2,024 |
Improve Temporal Awareness of LLMs for Sequential Recommendation | Zhendong Chu, Zichao Wang, Ruiyi Zhang, Yangfeng Ji, Hongning Wang, Tong Sun | Large language models (LLMs) have demonstrated impressive zero-shot abilities in solving a wide range of general-purpose tasks. However, it is empirically found that LLMs fall short in recognizing and utilizing temporal information, rendering poor performance in tasks that require an understanding of sequential data, s... | http://arxiv.org/abs/2405.02778v1 | 2024-05-05T00:21:26 | cs.IR | 2,024 |
Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning | Che Guan, Andrew Chin, Puya Vahabi | With the deluge of information delivered by the daily news cycle, there is a growing need to effectively and efficiently summarize news feeds for quick consumption. We leverage large language models (LLMs), with their advanced learning and generative abilities as compared to conventional language models, to generate co... | http://arxiv.org/abs/2405.02710v1 | 2024-05-04T16:48:05 | cs.CL | 2,024 |
R4: Reinforced Retriever-Reorder-Responder for Retrieval-Augmented Large Language Models | Taolin Zhang, Dongyang Li, Qizhou Chen, Chengyu Wang, Longtao Huang, Hui Xue, Xiaofeng He, Jun Huang | Retrieval-augmented large language models (LLMs) leverage relevant content retrieved by information retrieval systems to generate correct responses, aiming to alleviate the hallucination problem. However, existing retriever-responder methods typically append relevant documents to the prompt of LLMs to perform text gene... | http://arxiv.org/abs/2405.02659v1 | 2024-05-04T12:59:10 | cs.CL | 2,024 |
Astro-NER -- Astronomy Named Entity Recognition: Is GPT a Good Domain Expert Annotator? | Julia Evans, Sameer Sadruddin, Jennifer D'Souza | In this study, we address one of the challenges of developing NER models for scholarly domains, namely the scarcity of suitable labeled data. We experiment with an approach using predictions from a fine-tuned LLM model to aid non-domain experts in annotating scientific entities within astronomy literature, with the goa... | http://arxiv.org/abs/2405.02602v1 | 2024-05-04T08:04:39 | cs.CL, cs.AI, cs.IT, math.IT | 2,024 |
REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs | Deepa Tilwani, Yash Saxena, Ali Mohammadi, Edward Raff, Amit Sheth, Srinivasan Parthasarathy, Manas Gaur | Automatic citation generation for sentences in a document or report is paramount for intelligence analysts, cybersecurity, news agencies, and education personnel. In this research, we investigate whether large language models (LLMs) are capable of generating references based on two forms of sentence queries: (a) Direct... | http://arxiv.org/abs/2405.02228v1 | 2024-05-03T16:38:51 | cs.CL, cs.AI, cs.IR | 2,024 |
FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems | Yashar Deldjoo | This paper presents a framework for evaluating fairness in recommender systems powered by Large Language Models (RecLLMs), addressing the need for a unified approach that spans various fairness dimensions including sensitivity to user attributes, intrinsic fairness, and discussions of fairness based on underlying benef... | http://arxiv.org/abs/2405.02219v1 | 2024-05-03T16:25:27 | cs.IR | 2,024 |
Assessing and Verifying Task Utility in LLM-Powered Applications | Negar Arabzadeh, Siqing Huo, Nikhil Mehta, Qingyun Wu, Chi Wang, Ahmed Awadallah, Charles L. A. Clarke, Julia Kiseleva | The rapid development of Large Language Models (LLMs) has led to a surge in applications that facilitate collaboration among multiple agents, assisting humans in their daily tasks. However, a significant gap remains in assessing to what extent LLM-powered applications genuinely enhance user experience and task executio... | http://arxiv.org/abs/2405.02178v1 | 2024-05-03T15:26:27 | cs.CL, cs.AI | 2,024 |
MedReadMe: A Systematic Study for Fine-grained Sentence Readability in Medical Domain | Chao Jiang, Wei Xu | Medical texts are notoriously challenging to read. Properly measuring their readability is the first step towards making them more accessible. In this paper, we present a systematic study on fine-grained readability measurements in the medical domain at both sentence-level and span-level. We introduce a new dataset Med... | http://arxiv.org/abs/2405.02144v1 | 2024-05-03T14:48:20 | cs.CL | 2,024 |
Unveiling the Potential of LLM-Based ASR on Chinese Open-Source Datasets | Xuelong Geng, Tianyi Xu, Kun Wei, Bingshen Mu, Hongfei Xue, He Wang, Yangze Li, Pengcheng Guo, Yuhang Dai, Longhao Li, Mingchen Shao, Lei Xie | Large Language Models (LLMs) have demonstrated unparalleled effectiveness in various NLP tasks, and integrating LLMs with automatic speech recognition (ASR) is becoming a mainstream paradigm. Building upon this momentum, our research delves into an in-depth examination of this paradigm on a large open-source Chinese da... | http://arxiv.org/abs/2405.02132v2 | 2024-05-03T14:35:58 | cs.SD, cs.CL, eess.AS | 2,024 |
Aloe: A Family of Fine-tuned Open Healthcare LLMs | Ashwin Kumar Gururajan, Enrique Lopez-Cuena, Jordi Bayarri-Planas, Adrian Tormos, Daniel Hinjos, Pablo Bernabeu-Perez, Anna Arias-Duart, Pablo Agustin Martin-Torres, Lucia Urcelay-Ganzabal, Marta Gonzalez-Mallo, Sergio Alvarez-Napagao, Eduard Ayguadé-Parra, Ulises Cortés, Dario Garcia-Gasulla | As the capabilities of Large Language Models (LLMs) in healthcare and medicine continue to advance, there is a growing need for competitive open-source models that can safeguard public interest. With the increasing availability of highly competitive open base models, the impact of continued pre-training is increasingly... | http://arxiv.org/abs/2405.01886v1 | 2024-05-03T07:14:07 | cs.CL, cs.AI | 2,024 |
DALLMi: Domain Adaption for LLM-based Multi-label Classifier | Miruna Beţianu, Abele Mălan, Marco Aldinucci, Robert Birke, Lydia Chen | Large language models (LLMs) increasingly serve as the backbone for classifying text associated with distinct domains and simultaneously several labels (classes). When encountering domain shifts, e.g., classifier of movie reviews from IMDb to Rotten Tomatoes, adapting such an LLM-based multi-label classifier is challen... | http://arxiv.org/abs/2405.01883v1 | 2024-05-03T07:04:26 | cs.CL, cs.LG | 2,024 |
Incorporating External Knowledge and Goal Guidance for LLM-based Conversational Recommender Systems | Chuang Li, Yang Deng, Hengchang Hu, Min-Yen Kan, Haizhou Li | This paper aims to efficiently enable large language models (LLMs) to use external knowledge and goal guidance in conversational recommender system (CRS) tasks. Advanced LLMs (e.g., ChatGPT) are limited in domain-specific CRS tasks for 1) generating grounded responses with recommendation-oriented knowledge, or 2) proac... | http://arxiv.org/abs/2405.01868v1 | 2024-05-03T05:42:57 | cs.CL | 2,024 |
LLM as Dataset Analyst: Subpopulation Structure Discovery with Large Language Model | Yulin Luo, Ruichuan An, Bocheng Zou, Yiming Tang, Jiaming Liu, Shanghang Zhang | The distribution of subpopulations is an important property hidden within a dataset. Uncovering and analyzing the subpopulation distribution within datasets provides a comprehensive understanding of the datasets, standing as a powerful tool beneficial to various downstream tasks, including Dataset Subpopulation Organiz... | http://arxiv.org/abs/2405.02363v1 | 2024-05-03T05:09:54 | cs.CV, cs.CL | 2,024 |
A Survey of Time Series Foundation Models: Generalizing Time Series Representation with Large Language Model | Jiexia Ye, Weiqi Zhang, Ke Yi, Yongzi Yu, Ziyue Li, Jia Li, Fugee Tsung | Time series data are ubiquitous across various domains, making time series analysis critically important. Traditional time series models are task-specific, featuring singular functionality and limited generalization capacity. Recently, large language foundation models have unveiled their remarkable capabilities for cro... | http://arxiv.org/abs/2405.02358v2 | 2024-05-03T03:12:55 | cs.LG, cs.AI | 2,024 |
Improving Concept Alignment in Vision-Language Concept Bottleneck Models | Nithish Muthuchamy Selvaraj, Xiaobao Guo, Bingquan Shen, Adams Wai-Kin Kong, Alex Kot | Concept Bottleneck Models (CBM) map the input image to a high-level human-understandable concept space and then make class predictions based on these concepts. Recent approaches automate the construction of CBM by prompting Large Language Models (LLM) to generate text concepts and then use Vision Language Models (VLM) ... | http://arxiv.org/abs/2405.01825v1 | 2024-05-03T03:02:00 | cs.CV | 2,024 |
ALCM: Autonomous LLM-Augmented Causal Discovery Framework | Elahe Khatibi, Mahyar Abbasian, Zhongqi Yang, Iman Azimi, Amir M. Rahmani | To perform effective causal inference in high-dimensional datasets, initiating the process with causal discovery is imperative, wherein a causal graph is generated based on observational data. However, obtaining a complete and accurate causal graph poses a formidable challenge, recognized as an NP-hard problem. Recentl... | http://arxiv.org/abs/2405.01744v1 | 2024-05-02T21:27:45 | cs.LG, cs.AI, cs.CL, stat.ME | 2,024 |
Large Language Models are Inconsistent and Biased Evaluators | Rickard Stureborg, Dimitris Alikaniotis, Yoshi Suhara | The zero-shot capability of Large Language Models (LLMs) has enabled highly flexible, reference-free metrics for various tasks, making LLM evaluators common tools in NLP. However, the robustness of these LLM evaluators remains relatively understudied; existing work mainly pursued optimal performance in terms of correla... | http://arxiv.org/abs/2405.01724v1 | 2024-05-02T20:42:28 | cs.CL, cs.AI, 68T50 (Primary) 68T01, 68T37, 91F20 (Secondary), I.2; I.2.7; I.7 | 2,024 |
Automatically Extracting Numerical Results from Randomized Controlled Trials with Large Language Models | Hye Sun Yun, David Pogrebitskiy, Iain J. Marshall, Byron C. Wallace | Meta-analyses statistically aggregate the findings of different randomized controlled trials (RCTs) to assess treatment effectiveness. Because this yields robust estimates of treatment effectiveness, results from meta-analyses are considered the strongest form of evidence. However, rigorous evidence syntheses are time-... | http://arxiv.org/abs/2405.01686v1 | 2024-05-02T19:20:11 | cs.CL, cs.AI | 2,024 |
Leveraging Prompt-Learning for Structured Information Extraction from Crohn's Disease Radiology Reports in a Low-Resource Language | Liam Hazan, Gili Focht, Naama Gavrielov, Roi Reichart, Talar Hagopian, Mary-Louise C. Greer, Ruth Cytter Kuint, Dan Turner, Moti Freiman | Automatic conversion of free-text radiology reports into structured data using Natural Language Processing (NLP) techniques is crucial for analyzing diseases on a large scale. While effective for tasks in widely spoken languages like English, generative large language models (LLMs) typically underperform with less comm... | http://arxiv.org/abs/2405.01682v1 | 2024-05-02T19:11:54 | cs.CL, cs.AI | 2,024 |
Investigating Wit, Creativity, and Detectability of Large Language Models in Domain-Specific Writing Style Adaptation of Reddit's Showerthoughts | Tolga Buz, Benjamin Frost, Nikola Genchev, Moritz Schneider, Lucie-Aimée Kaffee, Gerard de Melo | Recent Large Language Models (LLMs) have shown the ability to generate content that is difficult or impossible to distinguish from human writing. We investigate the ability of differently-sized LLMs to replicate human writing style in short, creative texts in the domain of Showerthoughts, thoughts that may occur during... | http://arxiv.org/abs/2405.01660v1 | 2024-05-02T18:29:58 | cs.CL, cs.AI | 2,024 |
COPAL: Continual Pruning in Large Language Generative Models | Srikanth Malla, Joon Hee Choi, Chiho Choi | Adapting pre-trained large language models to different domains in natural language processing requires two key considerations: high computational demands and model's inability to continual adaptation. To simultaneously address both issues, this paper presents COPAL (COntinual Pruning in Adaptive Language settings), an... | http://arxiv.org/abs/2405.02347v1 | 2024-05-02T18:24:41 | cs.LG, cs.AI, cs.CL | 2,024 |
Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning | Tianle Xia, Liang Ding, Guojia Wan, Yibing Zhan, Bo Du, Dacheng Tao | Answering complex logical queries over incomplete knowledge graphs (KGs) is challenging. Most previous works have focused on learning entity/relation embeddings and simulating first-order logic operators with various neural networks. However, they are bottlenecked by the inability to share world knowledge to improve lo... | http://arxiv.org/abs/2405.01649v2 | 2024-05-02T18:12:08 | cs.CL | 2,024 |
OmniDrive: A Holistic LLM-Agent Framework for Autonomous Driving with 3D Perception, Reasoning and Planning | Shihao Wang, Zhiding Yu, Xiaohui Jiang, Shiyi Lan, Min Shi, Nadine Chang, Jan Kautz, Ying Li, Jose M. Alvarez | The advances in multimodal large language models (MLLMs) have led to growing interests in LLM-based autonomous driving agents to leverage their strong reasoning capabilities. However, capitalizing on MLLMs' strong reasoning capabilities for improved planning behavior is challenging since planning requires full 3D situa... | http://arxiv.org/abs/2405.01533v1 | 2024-05-02T17:59:24 | cs.CV | 2,024 |
Transformer-Aided Semantic Communications | Matin Mortaheb, Erciyes Karakaya, Mohammad A. Amir Khojastepour, Sennur Ulukus | The transformer structure employed in large language models (LLMs), as a specialized category of deep neural networks (DNNs) featuring attention mechanisms, stands out for their ability to identify and highlight the most relevant aspects of input data. Such a capability is particularly beneficial in addressing a variet... | http://arxiv.org/abs/2405.01521v1 | 2024-05-02T17:50:53 | cs.CV, cs.IT, cs.LG, eess.SP, math.IT | 2,024 |
Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving | Xin Quan, Marco Valentino, Louise A. Dennis, André Freitas | Natural language explanations have become a proxy for evaluating explainable and multi-step Natural Language Inference (NLI) models. However, assessing the validity of explanations for NLI is challenging as it typically involves the crowd-sourcing of apposite datasets, a process that is time-consuming and prone to logi... | http://arxiv.org/abs/2405.01379v2 | 2024-05-02T15:20:01 | cs.CL | 2,024 |
Overcoming LLM Challenges using RAG-Driven Precision in Coffee Leaf Disease Remediation | Dr. Selva Kumar S, Afifah Khan Mohammed Ajmal Khan, Imadh Ajaz Banday, Manikantha Gada, Vibha Venkatesh Shanbhag | This research introduces an innovative AI-driven precision agriculture system, leveraging YOLOv8 for disease identification and Retrieval Augmented Generation (RAG) for context-aware diagnosis. Focused on addressing the challenges of diseases affecting the coffee production sector in Karnataka, The system integrates so... | http://arxiv.org/abs/2405.01310v1 | 2024-05-02T14:19:25 | cs.IR, cs.CL | 2,024 |
The Effectiveness of LLMs as Annotators: A Comparative Overview and Empirical Analysis of Direct Representation | Maja Pavlovic, Massimo Poesio | Large Language Models (LLMs) have emerged as powerful support tools across various natural language tasks and a range of application domains. Recent studies focus on exploring their capabilities for data annotation. This paper provides a comparative overview of twelve studies investigating the potential of LLMs in labe... | http://arxiv.org/abs/2405.01299v1 | 2024-05-02T14:00:22 | cs.CL, cs.AI, cs.LG | 2,024 |
Efficient Data Generation for Source-grounded Information-seeking Dialogs: A Use Case for Meeting Transcripts | Lotem Golany, Filippo Galgani, Maya Mamo, Nimrod Parasol, Omer Vandsburger, Nadav Bar, Ido Dagan | Existing methods for creating source-grounded information-seeking dialog datasets are often costly and hard to implement due to their sole reliance on human annotators. We propose combining large language models (LLMs) prompting with human expertise for more efficient and reliable data generation. Instead of the labor-... | http://arxiv.org/abs/2405.01121v1 | 2024-05-02T09:35:06 | cs.CL, cs.AI | 2,024 |
Silencing the Risk, Not the Whistle: A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification | Dimitri Staufer, Frank Pallas, Bettina Berendt | Whistleblowing is essential for ensuring transparency and accountability in both public and private sectors. However, (potential) whistleblowers often fear or face retaliation, even when reporting anonymously. The specific content of their disclosures and their distinct writing style may re-identify them as the source.... | http://arxiv.org/abs/2405.01097v1 | 2024-05-02T08:52:29 | cs.CY, cs.CL, cs.HC, cs.IR, cs.SE, H.3; K.4; H.5; K.5; D.2; J.4 | 2,024 |
Learning Object States from Actions via Large Language Models | Masatoshi Tateno, Takuma Yagi, Ryosuke Furuta, Yoichi Sato | Temporally localizing the presence of object states in videos is crucial in understanding human activities beyond actions and objects. This task has suffered from a lack of training data due to object states' inherent ambiguity and variety. To avoid exhaustive annotation, learning from transcribed narrations in instruc... | http://arxiv.org/abs/2405.01090v1 | 2024-05-02T08:43:16 | cs.CV | 2,024 |
Context-Aware Clustering using Large Language Models | Sindhu Tipirneni, Ravinarayana Adkathimar, Nurendra Choudhary, Gaurush Hiranandani, Rana Ali Amjad, Vassilis N. Ioannidis, Changhe Yuan, Chandan K. Reddy | Despite the remarkable success of Large Language Models (LLMs) in text understanding and generation, their potential for text clustering tasks remains underexplored. We observed that powerful closed-source LLMs provide good quality clusterings of entity sets but are not scalable due to the massive compute power require... | http://arxiv.org/abs/2405.00988v1 | 2024-05-02T03:50:31 | cs.CL, cs.LG, I.2.7; I.2.m | 2,024 |
The Role of Model Architecture and Scale in Predicting Molecular Properties: Insights from Fine-Tuning RoBERTa, BART, and LLaMA | Lee Youngmin, Lang S. I. D. Andrew, Cai Duoduo, Wheat R. Stephen | This study introduces a systematic framework to compare the efficacy of Large Language Models (LLMs) for fine-tuning across various cheminformatics tasks. Employing a uniform training methodology, we assessed three well-known models-RoBERTa, BART, and LLaMA-on their ability to predict molecular properties using the Sim... | http://arxiv.org/abs/2405.00949v1 | 2024-05-02T02:20:12 | cs.LG, cs.CL, physics.chem-ph, q-bio.BM | 2,024 |
LLaVA Finds Free Lunch: Teaching Human Behavior Improves Content Understanding Abilities Of LLMs | Somesh Singh, Harini S I, Yaman K Singla, Veeky Baths, Rajiv Ratn Shah, Changyou Chen, Balaji Krishnamurthy | Communication is defined as ``Who says what to whom with what effect.'' A message from a communicator generates downstream receiver effects, also known as behavior. Receiver behavior, being a downstream effect of the message, carries rich signals about it. Even after carrying signals about the message, the behavior dat... | http://arxiv.org/abs/2405.00942v1 | 2024-05-02T02:04:01 | cs.CV, cs.CL | 2,024 |
Characterising the Creative Process in Humans and Large Language Models | Surabhi S. Nath, Peter Dayan, Claire Stevenson | Large language models appear quite creative, often performing on par with the average human on creative tasks. However, research on LLM creativity has focused solely on \textit{products}, with little attention on the creative \textit{process}. Process analyses of human creativity often require hand-coded categories or ... | http://arxiv.org/abs/2405.00899v1 | 2024-05-01T23:06:46 | cs.HC, cs.AI, cs.CL, q-bio.NC | 2,024 |
Efficient and Responsible Adaptation of Large Language Models for Robust Top-k Recommendations | Kirandeep Kaur, Chirag Shah | Conventional recommendation systems (RSs) are typically optimized to enhance performance metrics uniformly across all training samples. This makes it hard for data-driven RSs to cater to a diverse set of users due to the varying properties of these users. The performance disparity among various populations can harm t... | http://arxiv.org/abs/2405.00824v1 | 2024-05-01T19:11:47 | cs.IR, cs.HC | 2,024 |
Self-Play Preference Optimization for Language Model Alignment | Yue Wu, Zhiqing Sun, Huizhuo Yuan, Kaixuan Ji, Yiming Yang, Quanquan Gu | Traditional reinforcement learning from human feedback (RLHF) approaches relying on parametric models like the Bradley-Terry model fall short in capturing the intransitivity and irrationality in human preferences. Recent advancements suggest that directly working with preference probabilities can yield a more accurate ... | http://arxiv.org/abs/2405.00675v1 | 2024-05-01T17:59:20 | cs.LG, cs.AI, cs.CL, stat.ML | 2,024 |
HalluVault: A Novel Logic Programming-aided Metamorphic Testing Framework for Detecting Fact-Conflicting Hallucinations in Large Language Models | Ningke Li, Yuekang Li, Yi Liu, Ling Shi, Kailong Wang, Haoyu Wang | Large language models (LLMs) have transformed the landscape of language processing, yet struggle with significant challenges in terms of security, privacy, and the generation of seemingly coherent but factually inaccurate outputs, commonly referred to as hallucinations. Among these challenges, one particularly pressing... | http://arxiv.org/abs/2405.00648v1 | 2024-05-01T17:24:42 | cs.SE | 2,024 |
Investigating Automatic Scoring and Feedback using Large Language Models | Gloria Ashiya Katuka, Alexander Gain, Yen-Yun Yu | Automatic grading and feedback have been long studied using traditional machine learning and deep learning techniques using language models. With the recent accessibility to high performing large language models (LLMs) like LLaMA-2, there is an opportunity to investigate the use of these LLMs for automatic grading and ... | http://arxiv.org/abs/2405.00602v1 | 2024-05-01T16:13:54 | cs.CL, cs.LG | 2,024 |
Long-Term Human Trajectory Prediction using 3D Dynamic Scene Graphs | Nicolas Gorlo, Lukas Schmid, Luca Carlone | We present a novel approach for long-term human trajectory prediction, which is essential for long-horizon robot planning in human-populated environments. State-of-the-art human trajectory prediction methods are limited by their focus on collision avoidance and short-term planning, and their inability to model complex ... | http://arxiv.org/abs/2405.00552v1 | 2024-05-01T14:50:58 | cs.RO, cs.HC | 2,024 |
BiomedRAG: A Retrieval Augmented Large Language Model for Biomedicine | Mingchen Li, Halil Kilicoglu, Hua Xu, Rui Zhang | Large Language Models (LLMs) have swiftly emerged as vital resources for different applications in the biomedical and healthcare domains; however, these models encounter issues such as generating inaccurate information or hallucinations. Retrieval-augmented generation provided a solution for these models to update know... | http://arxiv.org/abs/2405.00465v3 | 2024-05-01T12:01:39 | cs.CL | 2,024 |
CultiVerse: Towards Cross-Cultural Understanding for Paintings with Large Language Model | Wei Zhang, Wong Kam-Kwai, Biying Xu, Yiwen Ren, Yuhuai Li, Minfeng Zhu, Yingchaojie Feng, Wei Chen | The integration of new technology with cultural studies enhances our understanding of cultural heritage but often struggles to connect with diverse audiences. It is challenging to align personal interpretations with the intended meanings across different cultures. Our study investigates the important factors in appreci... | http://arxiv.org/abs/2405.00435v1 | 2024-05-01T10:35:08 | cs.HC | 2,024 |
A Careful Examination of Large Language Model Performance on Grade School Arithmetic | Hugh Zhang, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, Will Song, Tiffany Zhao, Pranav Raja, Dylan Slack, Qin Lyu, Sean Hendryx, Russell Kaplan, Michele Lunati, Summer Yue | Large language models (LLMs) have achieved impressive success on many benchmarks for mathematical reasoning. However, there is growing concern that some of this performance actually reflects dataset contamination, where data closely resembling benchmark questions leaks into the training data, instead of true reasoning ... | http://arxiv.org/abs/2405.00332v3 | 2024-05-01T05:52:05 | cs.CL, cs.AI, cs.LG | 2,024 |
DFKI-NLP at SemEval-2024 Task 2: Towards Robust LLMs Using Data Perturbations and MinMax Training | Bhuvanesh Verma, Lisa Raithel | The NLI4CT task at SemEval-2024 emphasizes the development of robust models for Natural Language Inference on Clinical Trial Reports (CTRs) using large language models (LLMs). This edition introduces interventions specifically targeting the numerical, vocabulary, and semantic aspects of CTRs. Our proposed system harnes... | http://arxiv.org/abs/2405.00321v1 | 2024-05-01T05:03:08 | cs.CL | 2,024 |
LITO: Learnable Intervention for Truthfulness Optimization | Farima Fatahi Bayat, Xin Liu, H. V. Jagadish, Lu Wang | Large language models (LLMs) can generate long-form and coherent text, but they still frequently hallucinate facts, thus limiting their reliability. To address this issue, inference-time methods that elicit truthful responses have been proposed by shifting LLM representations towards learned "truthful directions". Howe... | http://arxiv.org/abs/2405.00301v1 | 2024-05-01T03:50:09 | cs.CL | 2,024 |
Constrained Decoding for Secure Code Generation | Yanjun Fu, Ethan Baker, Yizheng Chen | Code Large Language Models (Code LLMs) have been increasingly used by developers to boost productivity, but they often generate vulnerable code. Thus, there is an urgent need to ensure that code generated by Code LLMs is correct and secure. Previous research has primarily focused on generating secure code, overlooking ... | http://arxiv.org/abs/2405.00218v1 | 2024-04-30T21:52:19 | cs.CR, cs.AI, cs.LG, cs.SE | 2,024 |
Graphical Reasoning: LLM-based Semi-Open Relation Extraction | Yicheng Tao, Yiqun Wang, Longju Bai | This paper presents a comprehensive exploration of relation extraction utilizing advanced language models, specifically Chain of Thought (CoT) and Graphical Reasoning (GRE) techniques. We demonstrate how leveraging in-context learning with GPT-3.5 can significantly enhance the extraction process, particularly through d... | http://arxiv.org/abs/2405.00216v1 | 2024-04-30T21:41:53 | cs.CL, cs.AI, cs.LG | 2,024 |
General Purpose Verification for Chain of Thought Prompting | Robert Vacareanu, Anurag Pratik, Evangelia Spiliopoulou, Zheng Qi, Giovanni Paolini, Neha Anna John, Jie Ma, Yassine Benajiba, Miguel Ballesteros | Many of the recent capabilities demonstrated by Large Language Models (LLMs) arise primarily from their ability to exploit contextual information. In this paper, we explore ways to improve reasoning capabilities of LLMs through (1) exploration of different chains of thought and (2) validation of the individual steps of... | http://arxiv.org/abs/2405.00204v1 | 2024-04-30T21:15:17 | cs.CL, cs.AI | 2,024 |
Uncovering What, Why and How: A Comprehensive Benchmark for Causation Understanding of Video Anomaly | Hang Du, Sicheng Zhang, Binzhu Xie, Guoshun Nan, Jiayang Zhang, Junrui Xu, Hangyu Liu, Sicong Leng, Jiangming Liu, Hehe Fan, Dajiu Huang, Jing Feng, Linli Chen, Can Zhang, Xuhuan Li, Hao Zhang, Jianhang Chen, Qimei Cui, Xiaofeng Tao | Video anomaly understanding (VAU) aims to automatically comprehend unusual occurrences in videos, thereby enabling various applications such as traffic surveillance and industrial manufacturing. While existing VAU benchmarks primarily concentrate on anomaly detection and localization, our focus is on more practicality,... | http://arxiv.org/abs/2405.00181v2 | 2024-04-30T20:11:49 | cs.CV, cs.AI | 2,024 |
Soft Preference Optimization: Aligning Language Models to Expert Distributions | Arsalan Sharifnassab, Sina Ghiassian, Saber Salehkaleybar, Surya Kanoria, Dale Schuurmans | We propose Soft Preference Optimization (SPO), a method for aligning generative models, such as Large Language Models (LLMs), with human preferences, without the need for a reward model. SPO optimizes model outputs directly over a preference dataset through a natural loss function that integrates preference loss with a... | http://arxiv.org/abs/2405.00747v1 | 2024-04-30T19:48:55 | cs.LG, cs.AI | 2,024 |
Visual Fact Checker: Enabling High-Fidelity Detailed Caption Generation | Yunhao Ge, Xiaohui Zeng, Jacob Samuel Huffman, Tsung-Yi Lin, Ming-Yu Liu, Yin Cui | Existing automatic captioning methods for visual content face challenges such as lack of detail, content hallucination, and poor instruction following. In this work, we propose VisualFactChecker (VFC), a flexible training-free pipeline that generates high-fidelity and detailed captions for both 2D images and 3D objects... | http://arxiv.org/abs/2404.19752v1 | 2024-04-30T17:55:27 | cs.CV | 2,024 |
When to Retrieve: Teaching LLMs to Utilize Information Retrieval Effectively | Tiziano Labruna, Jon Ander Campos, Gorka Azkune | In this paper, we demonstrate how Large Language Models (LLMs) can effectively learn to use an off-the-shelf information retrieval (IR) system specifically when additional context is required to answer a given question. Given the performance of IR systems, the optimal strategy for question answering does not always ent... | http://arxiv.org/abs/2404.19705v2 | 2024-04-30T16:52:55 | cs.CL, cs.IR | 2,024 |
RepEval: Effective Text Evaluation with LLM Representation | Shuqian Sheng, Yi Xu, Tianhang Zhang, Zanwei Shen, Luoyi Fu, Jiaxin Ding, Lei Zhou, Xinbing Wang, Chenghu Zhou | Automatic evaluation metrics for generated texts play an important role in the NLG field, especially with the rapid growth of LLMs. However, existing metrics are often limited to specific scenarios, making it challenging to meet the evaluation requirements of expanding LLM applications. Therefore, there is a demand for... | http://arxiv.org/abs/2404.19563v1 | 2024-04-30T13:50:55 | cs.CL | 2,024 |
Do Large Language Models Understand Conversational Implicature -- A case study with a chinese sitcom | Shisen Yue, Siyuan Song, Xinyuan Cheng, Hai Hu | Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn-dialogue-based dataset aimed at conversational implicature, sourced from dialogues in the Chinese sitcom $\t... | http://arxiv.org/abs/2404.19509v1 | 2024-04-30T12:43:53 | cs.CL, J.5 | 2,024 |
Neuro-Vision to Language: Image Reconstruction and Language enabled Interaction via Brain Recordings | Guobin Shen, Dongcheng Zhao, Xiang He, Linghao Feng, Yiting Dong, Jihang Wang, Qian Zhang, Yi Zeng | Decoding non-invasive brain recordings is crucial for advancing our understanding of human cognition, yet faces challenges from individual differences and complex neural signal representations. Traditional methods require custom models and extensive trials, and lack interpretability in visual reconstruction tasks. Our ... | http://arxiv.org/abs/2404.19438v2 | 2024-04-30T10:41:23 | cs.NE | 2,024 |
Improving LLM Classification of Logical Errors by Integrating Error Relationship into Prompts | Yanggyu Lee, Suchae Jeong, Jihie Kim | LLMs trained in the understanding of programming syntax are now providing effective assistance to developers and are being used in programming education such as in generation of coding problem examples or providing code explanations. A key aspect of programming education is understanding and dealing with error message.... | http://arxiv.org/abs/2404.19336v2 | 2024-04-30T08:03:22 | cs.AI, cs.PL | 2,024 |
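Each row above follows the fixed schema from the header (Title, Authors, Abstract, entry_id, Date, Categories, year). As a minimal sketch of how such records can be filtered by arXiv category — the two sample records are copied from the rows above, and the helper name `filter_by_category` is an illustration, not part of any published loader:

```python
# Sketch: filtering arXiv metadata records shaped like the table above.
# Only a subset of the schema fields is shown; values come from the rows above.

records = [
    {
        "Title": "Self-Play Preference Optimization for Language Model Alignment",
        "entry_id": "http://arxiv.org/abs/2405.00675v1",
        "Categories": "cs.LG, cs.AI, cs.CL, stat.ML",
        "year": 2024,
    },
    {
        "Title": "Long-Term Human Trajectory Prediction using 3D Dynamic Scene Graphs",
        "entry_id": "http://arxiv.org/abs/2405.00552v1",
        "Categories": "cs.RO, cs.HC",
        "year": 2024,
    },
]

def filter_by_category(rows, category):
    """Keep rows whose comma-separated Categories field contains `category`."""
    return [
        r for r in rows
        if category in {c.strip() for c in r["Categories"].split(",")}
    ]

cl_papers = filter_by_category(records, "cs.CL")
print([r["Title"] for r in cl_papers])
```

Splitting the `Categories` string into a set first avoids false substring matches (e.g. `cs.CL` inside a longer tag) and makes membership checks exact.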